UK COMPUTING RESEARCH GRAND CHALLENGES
Machines for intelligent ghosts?
Controversies in Cognitive Systems Research
This 'controversies' website started as a portion of that one, but soon grew large enough to be separated off. It will probably continue to grow indefinitely, and suggestions and criticisms are welcome. The location will shortly move to the euCognition web site, where a Controversies in Cognitive Systems Research section has been added to the euCognition wiki. This will be jointly edited by David Vernon and Aaron Sloman, to whom suggestions and criticisms should be sent.
Introduction
Many controversies are associated with GC-5 research (defined above).
Although it is hoped that researchers can in principle agree on the kinds of functionality that will need to be explained, it is not expected that they will agree initially on the theories and concepts to be used, or on which sorts of mechanisms and architectures are likely to work. However, if they can collaborate for a few years on the task of specifying what needs to be explained (much of which can be based on observation of humans and other animals, in various physical and social environments), that may lay the foundation for eventual agreement on tests against which theories and models can be evaluated.
What follows is a provisional and incomplete list, in no particular order, of topics on which there is disagreement that might one day be resolved (or reduced) by starting from agreed requirements, at least in the form of well organised, properly documented analyses of the competences that need to be modelled, explained or replicated in robots:
It is arguable that disputes about how to define a particular widely used word or phrase are pointless. The more important task is to get clear about the logical geography and logical topography surrounding and underlying the space of alternative definitions.
One of the common criticisms of computational neural net models is that they grossly oversimplify the ways in which real neurones work, ignoring the diversity of types of neurones, the variety of kinds of organisation of networks, and the role of chemical processes involving neurotransmitters and hormones, for instance the role of opioids in learning claimed by Biederman and colleagues in Perceptual Pleasure and the Brain, Irving Biederman and Edward A. Vessel, American Scientist, 94(3), May-June 2006. (Also accessible as HTML here.)
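For concreteness, here is a minimal illustrative sketch (in Python, not taken from any of the cited work) of the 'textbook' unit that such criticisms target: the whole neurone is reduced to a weighted sum of inputs passed through a fixed squashing function, with no distinct neurone types, no spike timing, and no neurotransmitter or hormone chemistry. The input values and weights below are invented for the example.

    import math

    # The standard simplified 'unit' used in most computational neural net models:
    # everything a real neurone does is collapsed into a weighted sum plus a
    # fixed squashing function producing a single 'firing rate'.
    def artificial_neuron(inputs, weights, bias=0.0):
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))   # logistic output in (0, 1)

    print(artificial_neuron([0.5, 1.0, -0.2], [0.8, -0.3, 0.5]))   # illustrative data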
For more on Concept Empiricism, see a recent paper by a philosopher attacking concept empiricism: Concept Empiricism: A Methodological Criticism (PDF, to appear in Cognition) by Edouard Machery, Department of History and Philosophy of Science, University of Pittsburgh. For a recent defence of Concept Empiricism see The Return of Concept Empiricism (PDF) by Jesse J. Prinz, Department of Philosophy, University of North Carolina at Chapel Hill [penultimate draft of a chapter in H. Cohen and C. Lefebvre (Eds.), Categorization and Cognitive Science, Elsevier (forthcoming)].
A currently popular view is expressed thus by David Cliff: 'Common to all of these new approaches was the observation that many naturally-occurring systems, at one level of analysis, can be described as being built from components that are individually "simple" and that interact with each other in relatively "simple" ways; yet at another level of analysis these systems exhibit some "complex" overall behaviour that is not readily predictable from the individual components.' (Biologically-Inspired Computing Approaches To Cognitive Systems: a partial tour of the literature, Dave Cliff, Hewlett Packard Research Laboratory, HPL-2003-11, 2003.)
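A minimal illustration of the kind of system Cliff describes (not taken from his paper) is Conway's Game of Life: each cell obeys two trivial local rules, yet the grid as a whole produces structures, such as the 'glider' used below, whose behaviour is not readily predicted from the rules alone.

    from collections import Counter

    # One step of Conway's Game of Life: a cell is alive at the next step if it
    # has exactly 3 live neighbours, or has exactly 2 live neighbours and is
    # alive now.  These two local rules are the whole system.
    def step(live):
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a 'glider'
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))   # the same shape, shifted diagonally by one cell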
'We see computer vision--or just "vision"; apologies to those who study human or animal vision--as an enterprise that uses statistical methods to disentangle data using models constructed with the aid of geometry, physics and learning theory.'
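As a toy illustration of the flavour of approach described in that quotation (not taken from the quoted source), a geometric model, here a straight edge y = a*x + b, can be fitted to noisy measurements by a statistical criterion such as least squares; the data and names below are invented for the example.

    # Ordinary least-squares fit of a line y = a*x + b to noisy 2-D points.
    def fit_line(points):
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
        b = (sy - a * sx) / n                           # intercept
        return a, b

    # Measurements scattered around the 'edge' y = 2x + 1 (illustrative data).
    noisy_edge = [(x, 2.0 * x + 1.0 + (0.1 if x % 2 else -0.1)) for x in range(10)]
    print(fit_line(noisy_edge))   # approximately (2.0, 1.0)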
Some aspects of this controversy are muddied by the fact that many people start from historical claims that are at least debatable. For example, there is an online introduction to neural nets which offers a very clear example of such a characterisation of Symbolic AI, listing arguments that are frequently presented but mistaken.
The quotation is from this section of Chapter 1 of an early online draft of Kevin Gurney's Introduction to Neural Networks, published by Routledge in 1997. The section is quoted with the author's kind permission. The final version of this discussion in Chapter 11 of the published book proposes that both symbolic and neural mechanisms are needed because the different mechanisms serve different functions. Responses to the statements in the quoted text have been inserted [in square brackets].
(Symbolic AI assumes that) it is possible to process strings of symbols which obey the rules of some formal system and which are interpreted (by humans) as 'ideas' or 'concepts'.
[Many AI programs, even in the 1960s, made use of information both in the form of input (e.g. image features) and in intermediate structures that had no correspondence with human ideas or concepts.]
It was the hope of the AI programme that all knowledge could be formalised in such a way: that is, it could be reduced to the manipulation of symbols according to rules, and this manipulation implemented on a von Neumann machine (conventional computer).
[It was assumed that this was how researchers would implement their theories. When the programs ran they could model many different kinds of computation, including things that are very different from von Neumann machines: theorem provers, pattern matchers, spreading activation networks, production system interpreters, feature detectors operating on images or other forms of input. Likewise, the vast majority of neural net researchers write their programs to run on von Neumann machines, but when the programs run they implement non-von-Neumann virtual machines.]
We may draw up a list of the essential characteristics of such machines for comparison with those of networks.
- The machine must be told in advance, and in great detail, the exact series of steps required to perform the algorithm. This series of steps is the computer program.
[This completely ignores work on planners, theorem provers, and learning systems which work out for themselves what to do.]
- The type of data it deals with has to be in a precise format - noisy data confuses the machine.
[There was a lot of AI research even on noisy images and speech, and various techniques were developed for dealing with noise, e.g. the use of Hough transforms (see the first sketch after this list).]
- The hardware is easily degraded - destroy a few key memory locations and the machine will stop functioning or 'crash'.
[A neural net program will also crash if you destroy the memory locations where the compiled code resides or where the data-structures used by the compiled code are located. However, that fragile implementation may support a very robust virtual machine. Similarly, many AI systems which are not implemented as linear algorithms but as collections of interacting components, e.g. rule-based systems, can be very robust.
(E.g. Nilsson's work on teleo-reactive systems. These constantly sense the environment and modify their current plans and goals accordingly.)
Of course, all existing systems have limitations and flaws of various kinds. There is no evidence that any one approach solves all the problems.]
- There is a clear correspondence between the semantic objects being dealt with (numbers, words, database entries etc.) and the machine hardware. Each object can be 'pointed to' in a block of computer memory.
[This completely ignores the use of virtual machines containing items that do not have any specific location in physical memory (see the second sketch after this list). That important fact was also ignored by Newell and Simon, who generated much confusion by their 'Physical Symbol System' hypothesis: they failed to make clear that they were talking about physically implemented virtual-machine symbols, not about physical symbols.]
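The first sketch below (illustrative only, not from Gurney's text) shows one of the classical techniques mentioned above for coping with noisy data: a simple Hough transform for straight-line detection, in which every point votes for all the line parameters consistent with it, so that genuine lines emerge as peaks in the vote table despite noise and outliers. The data are invented for the example.

    import math
    from collections import Counter

    # Minimal Hough transform for straight lines written as
    # rho = x*cos(theta) + y*sin(theta): each point votes for every
    # (theta, rho) cell it could lie on; peaks in the vote table indicate lines.
    def hough_lines(points, n_angles=180, rho_step=1.0):
        votes = Counter()
        for (x, y) in points:
            for a in range(n_angles):
                theta = math.pi * a / n_angles
                rho = x * math.cos(theta) + y * math.sin(theta)
                votes[(a, round(rho / rho_step))] += 1
        return votes.most_common(3)   # the strongest line hypotheses

    # Points roughly on the line y = x, plus two outliers that do not win the vote.
    noisy = [(i, i + (0.3 if i % 2 else -0.3)) for i in range(20)] + [(3, 17), (15, 2)]
    print(hough_lines(noisy))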
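The second sketch (also illustrative only) is a tiny forward-chaining production-system interpreter of the kind referred to above: a non-von-Neumann virtual machine implemented in an ordinary programming language on conventional hardware, manipulating symbols that live in garbage-collected virtual-machine structures rather than at fixed physical memory addresses. The rules and facts are invented for the example.

    # Minimal forward-chaining rule interpreter: a rule fires whenever all its
    # conditions are present in working memory, until quiescence is reached.
    def run(rules, facts, limit=100):
        facts = set(facts)
        for _ in range(limit):
            fired = False
            for conditions, conclusion in rules:
                if set(conditions) <= facts and conclusion not in facts:
                    facts.add(conclusion)      # the rule fires
                    fired = True
            if not fired:                      # no rule can fire: stop
                break
        return facts

    rules = [(("bird", "healthy"), "can-fly"),
             (("can-fly", "hungry"), "searches-from-the-air")]
    print(run(rules, {"bird", "healthy", "hungry"}))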
A different sort of answer proposes that we consider information as being a kind of basic stuff in the universe, like matter (of various forms) and energy (of various forms), none of which is explicitly definable in terms of simpler or less obscure concepts, though each of them is implicitly defined by its role in the best explanatory and predictive theories available (now and in the future). That answer is elaborated here.
That seems to be the answer implicitly assumed by many psychologists, neuroscientists, biologists and engineers who talk about information, including Eva Jablonka and Marion J. Lamb in Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life, MIT Press, 2006. Selected for BBS book commentary, with a precis here.
Various researchers in cognitive science and AI have attempted to identify such general mechanisms, e.g.
(How many others have been proposed? ART?? SOAR?? Edelman's 'Neural Darwinism'??) An alternative view is that different animals and different sorts of robots will need to use different forms of learning, and that humans have discovered which sorts work well in different contexts and pass them on through various kinds of educational procedures. Some have claimed that there are different learning styles used by different human learners (which should also be recognised by teachers).
"To solve the adaptive problem of finding the right mate, our choices must be guided by qualitatively different standards than when choosing the right food, or the right habitat. Consequently, the brain must be composed of a large collection of circuits, with different circuits specialized for solving different problems. You can think of each of these specialized circuits as a mini-computer that is dedicated to solving one problem. Such dedicated mini-computers are sometimes called modules. There is, then, a sense in which you can view the brain as a collection of dedicated mini-computers -- a collection of modules. There must, of course, be circuits whose design is specialized for integrating the output of all these dedicated mini-computers to produce behavior. So, more precisely, one can view the brain as a collection of dedicated mini-computers whose operations are functionally integrated to produce behavior.
....
Biological machines are calibrated to the environments in which they evolved, and they embody information about the stably recurring properties of these ancestral worlds."
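A minimal sketch (illustrative only, not from the quoted authors) of the architecture that quotation describes: several specialised 'modules', each a small dedicated computation over the same percepts, plus a circuit that integrates their proposals into a single piece of behaviour. All the names and numbers below are invented.

    # Each 'module' is a dedicated mini-computation proposing one action with a strength.
    def food_module(percepts):    return ("eat",   percepts.get("sugar", 0.0))
    def hazard_module(percepts):  return ("flee",  percepts.get("predator", 0.0))
    def mate_module(percepts):    return ("court", percepts.get("display", 0.0))

    # A crude integrating circuit: winner-take-all over the modules' proposals.
    def integrate(percepts, modules=(food_module, hazard_module, mate_module)):
        proposals = [m(percepts) for m in modules]
        action, strength = max(proposals, key=lambda p: p[1])
        return action

    print(integrate({"sugar": 0.4, "predator": 0.9, "display": 0.2}))   # 'flee'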
Beyond Modularity attempts a synthesis of Fodor's anti-constructivist nativism and Piaget's anti-nativist constructivism. Contra Fodor, I argue that: (1) the study of cognitive development is essential to cognitive science, (2) the module/central processing dichotomy is too rigid, and (3) the mind does not begin with prespecified modules, but that development involves a gradual process of modularization. Contra Piaget, I argue that: (1) development rarely involves stage-like domain-general change, and (2) domain-specific predispositions give development a small but significant kickstart by focusing the infant's attention on proprietary inputs. Development does not stop at efficient learning. A fundamental aspect of human development ("Representational Redescription") is the hypothesized process by which information that is IN a cognitive system becomes progressively explicit knowledge TO that system. Development thus involves two complementary processes of progressive modularization and rendering explicit.
SOME REFERENCES ON LANGUAGE EVOLUTION
(This is just a tiny subset, not representative.)
Marc D. Hauser, Noam Chomsky and W. Tecumseh Fitch (2002),
The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?
Science, 22 November 2002, Vol. 298, No. 5598, pp. 1569-1579.

Steven Pinker and Ray Jackendoff (2005),
The faculty of language: what's special about it?
Cognition, 95(2):201-236. Also here.

Fitch, W. T., Hauser, M. D., and Chomsky, N. (2005) (reply to the above),
The evolution of the language faculty: Clarifications and implications. Also here.
Cognition, Volume 97, Issue 2, September 2005, Pages 179-210.

Hajime Yamauchi (2004),
Baldwinian Accounts of Language Evolution, PhD thesis (PDF),
Theoretical and Applied Linguistics, University of Edinburgh.

Michael A. Arbib (2005),
From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics.
Behavioral and Brain Sciences, 28(2):105-124.
Preprint available: http://www.bbsonline.org/Preprints/Arbib-05012002/

Beth Azar (2005),
How mimicry begat culture.
APA Online Monitor on Psychology, Volume 36, No. 9, October 2005.
(Includes a few more references.)

Michael C. Corballis,
From Hand to Mouth: The Origins of Language,
Princeton University Press, 2003. Review by James Hurford.

Aaron Sloman (March 2007),
What is human language? How might it have evolved?
Seminar presentation (PDF), arguing that if we define a notion of g-language (generalised language) to refer to forms of representation of information that include structural variability, some systematicity, and compositional semantics (usually context sensitive), then internal g-languages evolved before human (external) language, are used by non-linguistic animals and pre-linguistic children, and explain both why sign languages are so easily learnt and how a community of deaf children could create an entirely new sign language. (Comments invited.) Builds on The primacy of non-communicative language (1979).
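A minimal sketch (illustrative only, not from the presentation) of what 'structural variability' with 'compositional semantics' amounts to: the denotation of a structured expression is computed from the denotations of its parts by the same rules, whatever the size or shape of the structure. The operators, names and 'world' below are invented for the example.

    # Evaluate a structured expression against a simple 'world' mapping names
    # to positions; the meaning of a whole is composed from the meanings of its parts.
    def denote(expr, world):
        if isinstance(expr, str):                 # atomic symbol
            return world[expr]
        operator, *args = expr                    # structured, variable-size expression
        values = [denote(a, world) for a in args]
        if operator == "left_of":
            return values[0] < values[1]
        if operator == "and":
            return all(values)
        raise ValueError("unknown operator: " + operator)

    world = {"cup": 2, "plate": 5, "spoon": 7}
    print(denote(("and", ("left_of", "cup", "plate"),
                         ("left_of", "plate", "spoon")), world))   # True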
Obviously being embodied is important when you are running, climbing trees, catching things, building a house, avoiding being eaten, working out whether you can reach something, feeling sexually attracted, enjoying making music, and most of the time when growing from infancy to adulthood. But how important is it when you are lying flat on your back solving a mathematical problem, reading a book on ancient history, studying quantum mechanics, or wondering how language evolved? Is it perhaps more important that your ancestors were embodied? See the proceedings of the euCognition workshop http://www.eucognition.org/embodying_cognition_2006.htm
EMBODYING COGNITION: TOWARDS AN INTEGRATIVE APPROACH?
Palma de Mallorca, 14-16 December 2006
especially the excellent paper on the Workshop Theme by Toni Gomila and Paco Calvo. There was also a special issue on situated and embodied cognition in Cognitive Systems Research, Volume 3, Issue 3, September 2002, Pages 271-274, edited by Tom Ziemke.
The controversies about embodiment are closely related to several other controversies in this web site, including:
- How 'cognition' should be defined.
- Whether all mental and neural mechanisms should be regarded as dynamical systems, and modelled as such.
- The role of neural mechanisms.
- Symbol-grounding vs symbol-tethering
- Whether only bottom-up research and emergent phenomena can explain
- Sensorimotor ontologies: Somatic vs Exosomatic
- Statistical vs structural models
- Concepts of causation required by intelligent systems: Humean or Kantian, or both?
- What sort of architecture does an intelligent system need?
- What is Symbolic AI - should it be rejected? Is it needed?
- Is information always in the eye of the beholder?
- Computation and Embodiment: Three issues, not two
Last updated: 14 Dec 2008 (with minimal formatting changes by Leslie Smith, Dec 2012)