===Biological realism===
Connectionist work in general does not need to be biologically realistic, and it has therefore been criticized for lacking neuroscientific plausibility.<ref>{{Cite web|url=http://www.encephalos.gr/48-1-01e.htm|title=Encephalos Journal|website=www.encephalos.gr|access-date=2018-02-20}}</ref><ref>{{Cite book|url=https://books.google.com/?id=s-OCCwAAQBAJ&pg=PT18&lpg=PT18&dq=%22accurate%22#v=onepage&q=%22accurate%22&f=false|title=Neural Geographies: Feminism and the Microstructure of Cognition|last=Wilson|first=Elizabeth A.|date=2016-02-04|publisher=Routledge|isbn=9781317958765|language=en}}</ref><ref>{{Cite web|url=https://pdfs.semanticscholar.org/e953/59bc80e624a963a3d8c943e3b2898a397ef7.pdf|title=Organismically-inspired robotics: homeostatic adaptation and teleology beyond the closed sensorimotor loop}}</ref><ref>{{Cite journal|last=Zorzi|first=Marco|last2=Testolin|first2=Alberto|last3=Stoianov|first3=Ivilin P.|date=2013-08-20|title=Modeling language and cognition with deep unsupervised learning: a tutorial overview|journal=Frontiers in Psychology|volume=4|pages=515|doi=10.3389/fpsyg.2013.00515|issn=1664-1078|pmc=3747356|pmid=23970869}}</ref><ref>{{Cite web|url=http://scholarworks.sjsu.edu/cgi/viewcontent.cgi?article=1015&context=comparativephilosophy|title=Analytic and Continental Philosophy}}</ref><ref>{{Cite book|url=https://books.google.com/?id=uV9TZzOITMwC&pg=PA17&lpg=PA17&dq=%22biological%20plausibility%22#v=onepage&q=%22biological%20plausibility%22&f=false|title=Neural Network Perspectives on Cognition and Adaptive Robotics|last=Browne|first=A.|date=1997-01-01|publisher=CRC Press|isbn=9780750304559|language=en}}</ref><ref>{{Cite book|url=https://books.google.com/books?id=7pPv0STSos8C&pg=PA63&lpg=PA63&dq=%22biological+realism%22#v=onepage&q=%22biological%20realism%22&f=false|title=Connectionism in Perspective|last=Pfeifer|first=R.|last2=Schreter|first2=Z.|last3=Fogelman-Soulié|first3=F.|last4=Steels|first4=L.|date=1989-08-23|publisher=Elsevier|isbn=9780444598769|language=en}}</ref> However, the structure of neural networks is derived from that of biological [[neuron]]s, and this parallel in low-level structure is often argued to be an advantage of connectionism in modeling cognitive structures compared with other approaches.<ref name=":2" /> One area where connectionist models are thought to be biologically implausible is the error-propagation networks that are needed to support learning.<ref>{{Cite journal|last=Crick|first=Francis|date=January 1989|title=The recent excitement about neural networks|journal=Nature|language=en|volume=337|issue=6203|pages=129–132|doi=10.1038/337129a0|pmid=2911347|issn=1476-4687|bibcode=1989Natur.337..129C}}</ref><ref name=":4">{{Cite journal|last=Rumelhart|first=David E.|last2=Hinton|first2=Geoffrey E.|last3=Williams|first3=Ronald J.|date=October 1986|title=Learning representations by back-propagating errors|journal=Nature|language=en|volume=323|issue=6088|pages=533–536|doi=10.1038/323533a0|issn=1476-4687|bibcode=1986Natur.323..533R}}</ref> However, error propagation can explain some of the biologically generated electrical activity seen at the scalp in [[event-related potential]]s such as the [[N400 (neuroscience)|N400]] and [[P600 (neuroscience)|P600]],<ref>{{Cite journal|last=Fitz|first=Hartmut|last2=Chang|first2=Franklin|date=2019-06-01|title=Language ERPs reflect learning through prediction error propagation|journal=Cognitive Psychology|volume=111|pages=15–52|doi=10.1016/j.cogpsych.2019.03.002|issn=0010-0285|hdl=21.11116/0000-0003-474F-6|hdl-access=free}}</ref> and this provides some biological support for one of the key assumptions of connectionist learning procedures.
 
===Learning===
The weights in a neural network are adjusted according to some [[learning rule]] or algorithm, such as [[Hebbian theory|Hebbian learning]], and connectionists have created many sophisticated learning procedures for neural networks. Learning always involves modifying the connection weights: in general, a learning rule is a mathematical formula that determines the change in each weight, given sets of data consisting of activation vectors for some subset of the neural units; a minimal sketch of a Hebbian update appears below. Several studies have focused on designing teaching-learning methods based on connectionism.<ref>{{Cite journal|last=Novo|first=María-Luisa|last2=Alsina|first2=Ángel|last3=Marbán|first3=José-María|last4=Berciano|first4=Ainhoa|date=2017|title=Connective Intelligence for Childhood Mathematics Education|journal=Comunicar|language=es|volume=25|issue=52|pages=29–39|doi=10.3916/c52-2017-03|issn=1134-3478|doi-access=free}}</ref>
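
To illustrate, the following is a minimal sketch of a Hebbian learning rule (Δ''w''<sub>''ij''</sub> = ''η'' ''x''<sub>''i''</sub> ''x''<sub>''j''</sub>), in which each weight grows in proportion to the co-activation of the two units it connects. The function name, learning rate, and example activations are illustrative assumptions, not details from a particular source.

<syntaxhighlight lang="python">
import numpy as np

def hebbian_update(weights, activations, learning_rate=0.1):
    """One Hebbian learning step (illustrative sketch).

    Each weight w[i, j] changes by eta * x[i] * x[j]: units that
    are active together strengthen their connection.
    """
    # The outer product gives the co-activation of every pair of units.
    return weights + learning_rate * np.outer(activations, activations)

# Example: three units, fully connected, starting from zero weights.
w = np.zeros((3, 3))
x = np.array([1.0, 0.0, 1.0])  # an activation vector
w = hebbian_update(w, x)
print(w)  # the weight between co-active units 0 and 2 has increased
</syntaxhighlight>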
 
Formalizing learning in this way gives connectionists many tools. A very common strategy in connectionist learning methods is to incorporate [[gradient descent]] over an error surface in a space defined by the weight matrix. Gradient descent learning in connectionist models changes each weight in proportion to the negative of the [[partial derivative]] of the error surface with respect to that weight. [[Backpropagation]] (BP), first made popular in the 1980s, is probably the most commonly known connectionist gradient descent algorithm today,<ref name=":4" /> and is sketched below.
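
The following is a minimal sketch of gradient descent with backpropagation, assuming a two-layer sigmoid network trained on the XOR problem with a squared-error loss; the network size, learning rate, initialization, and data are illustrative assumptions rather than details from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a classic test case for backpropagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights initialized at random.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
eta = 0.5  # learning rate (illustrative; may need tuning to converge)

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1)    # hidden activations
    out = sigmoid(h @ W2)  # network output

    # Backward pass: propagate the error derivative through each layer.
    err = out - y                        # dE/d(out) for squared error
    d_out = err * out * (1 - out)        # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # delta at the hidden layer

    # Gradient descent: each weight changes by -eta * dE/dw.
    W2 -= eta * (h.T @ d_out)
    W1 -= eta * (X.T @ d_h)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approximately [0, 1, 1, 0]
</syntaxhighlight>

Each weight update subtracts the learning rate times the partial derivative of the error with respect to that weight, which is the gradient descent rule described above; the backward pass is what computes those partial derivatives layer by layer.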