The symbol grounding problem

https://doi.org/10.1016/0167-2789(90)90087-6

Abstract

There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the “symbol grounding problem”: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g. “An X is a Y that is Z”).
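To make the proposed architecture concrete, the following Python sketch (not from the paper; all category names, prototypes, and thresholds are illustrative assumptions) shows one way the three kinds of representation could be arranged: iconic representations as analog copies of sensory projections, categorical representations as invariant feature detectors over those projections, and symbolic representations as strings of grounded category names ("a zebra is a horse that is striped").

```python
import numpy as np

# Hypothetical sketch of the three kinds of representation described above.
# Category names ("horse", "striped", "zebra"), prototypes, and thresholds
# are illustrative assumptions, not taken from the paper.

def iconic(projection):
    """Iconic representation: an analog of the proximal sensory projection."""
    return np.asarray(projection, dtype=float)

class CategoricalDetector:
    """Categorical representation: a feature detector that picks out the
    invariant features of a category from its sensory projections
    (here, a cosine-similarity threshold against a stored prototype)."""
    def __init__(self, prototype, threshold=0.5):
        self.prototype = np.asarray(prototype, dtype=float)
        self.threshold = threshold

    def __call__(self, icon):
        similarity = icon @ self.prototype / (
            np.linalg.norm(icon) * np.linalg.norm(self.prototype))
        return similarity > self.threshold

# Elementary symbols: names assigned on the basis of categorical representations.
detectors = {
    "horse":   CategoricalDetector([1.0, 0.0, 1.0]),
    "striped": CategoricalDetector([0.0, 1.0, 0.0]),
}

# Higher-order symbolic representation: a symbol string describing category
# membership ("a zebra is a horse that is striped"), grounded because its
# constituent names are themselves grounded.
symbolic = {"zebra": ["horse", "striped"]}

def names_for(projection):
    """Return every elementary and composed category name that applies."""
    icon = iconic(projection)
    elementary = {name for name, detect in detectors.items() if detect(icon)}
    composed = {name for name, parts in symbolic.items()
                if set(parts) <= elementary}
    return elementary | composed

print(names_for([0.9, 1.0, 1.1]))  # e.g. {'horse', 'striped', 'zebra'}
```

On this toy reading, "zebra" is never wired directly to any sensory pattern; it inherits its grounding from the already-grounded names "horse" and "striped".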

Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic “module,” however; the symbolic functions would emerge as an intrinsically “dedicated” symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
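A minimal sketch of the connectionist role described above, assuming only what the abstract states: a simple learner extracts, from labeled sensory projections, the invariant feature that warrants applying a category name, thereby connecting the name to the projections of the objects it stands for. The toy data, dimensions, learning rate, and training loop are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical connectionist learner: a single-layer perceptron that learns which
# invariant feature of the sensory projection warrants applying the name "striped".
rng = np.random.default_rng(0)

X = rng.uniform(0.0, 1.0, size=(200, 3))   # 200 toy sensory projections, 3 features
y = (X[:, 1] > 0.5).astype(float)          # the name applies iff feature 1 exceeds 0.5

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(50):                        # classic perceptron training loop
    for xi, yi in zip(X, y):
        pred = float((xi @ w + b) > 0.0)
        w += lr * (yi - pred) * xi         # update weights only on mistakes
        b += lr * (yi - pred)

print("learned weights:", w.round(2))      # the weight on feature 1 dominates,
                                           # connecting the name to the invariant
                                           # feature in the sensory projection
```

Any trainable feature learner could play this role; the perceptron stands in for the connectionist network only because it is the simplest device that can be shown in a few lines.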
