Biological plasticity is the ability of an organism to adapt its biological structure in response to certain environmental changes. Determining what causes this adaptation has proven to be nontrivial, even with recent advances in biology and neuroscience. This paper will discuss some of the models which have been proposed for the plasticity of the human brain, as well as different computational models.

Neuron

Neurons are electrically excitable cells that form the fundamental structures of the nervous system. A neuron is typically composed of a soma, or cell body, a dendritic tree and an axon. In the classical model, signals arrive at the neuron via the dendritic tree, are 'processed' within the soma, and are then propagated down the axon to the next neuron. Neurons can signal each other either through chemical synapses or, less commonly, through electrical synapses. According to the Neuron Doctrine, neurons are the basic structural and functional units of the nervous system, and they work independently rather than forming a hard-wired circuit.

Dendrite

The dendrite receives chemical input from the presynaptic neuron through synapses and propagates it to the soma, summing incoming signals both spatially (across all incoming synapses) and temporally (across several rapidly arriving signals). Recent experiments indicate that, through backpropagating action potentials, the neuron can send signals back into the dendritic arbor, depolarizing the dendrites and thereby contributing to long-term potentiation.

Soma

The soma contains the nucleus, which produces the nerve growth factor, and it regulates most of the ion pumps in the cell.

Axon

The axon, which may or may not be present in a given neuron, conducts electrical impulses away from the soma and towards the synapses. It is surrounded by a myelin sheath formed by glial cells, which insulates the axon and provides for better conduction. The point where the axon emerges from the soma is designated the axon hillock; it has the highest concentration of voltage-dependent ion channels, which makes it the most easily excited part of the neuron and the spike initiation zone for the axon. Eventually, the axon branches into numerous terminals in order to interact with multiple target neurons.

Axon Terminal

At the end we are left with the axon terminals, which pass the signals received by the dendrites and propagated by the soma on to the postsynaptic cells. As the impulse generated by the soma arrives at a chemical synapse, it causes an influx of calcium ions, which causes the vesicles docked at the synapse to fuse with the cell membrane. When they do, neurotransmitters are released, diffuse across the synaptic cleft, and bind to neurotransmitter receptors on the postsynaptic membrane, which open nearby ion channels, causing ions to move in or out and thereby changing the postsynaptic membrane potential. Glial cells take up any neurotransmitters not absorbed by the dendrites.

Impulse propagation

The lipid bilayer is an extremely inefficient conductor of electric charge; to overcome this, the neuron propagates ionic currents instead. At rest, the neuron's intracellular fluid is negative with respect to the extracellular fluid. By opening and closing ion gates along the cell membrane, the neuron can become depolarised and thereby initiate an action potential, which propagates down the axon and branches out towards the postsynaptic dendrites. An action potential is a rapid change of the polarity of the membrane voltage from negative to positive and back again, with the entire cycle lasting on the order of milliseconds. It is characterized by a rising phase, during which the cell depolarises; a peak, at which the action potential is propagated; a falling phase, during which the cell repolarises; and an undershoot (hyperpolarisation), during which the cell cannot engage in another action potential.

At rest, the concentration of ions on the inside of the cell membrane differs from, and is generally negative with respect to, the outside of the cell membrane; in the absence of action potentials the membrane therefore maintains a potential, referred to as the resting potential, which is usually about -70 mV. The term resting potential is slightly misleading, since the cell has to constantly transport ions in and out to maintain these concentrations. At rest, the concentration of sodium ions is greater in the extracellular fluid and the concentration of potassium ions is greater in the intracellular fluid; this equilibrium is maintained by the sodium-potassium pumps, which transport two potassium ions into the cell for every three sodium ions moved out. At rest, potassium ions can move across the membrane through the potassium leak channels, but sodium ions cannot pass through the closed voltage-gated sodium channels, which pulls the membrane potential closer to the potassium equilibrium potential (about -80 mV).

As the membrane becomes depolarised, some voltage-gated sodium channels open and sodium rushes into the negatively charged cell. As sodium enters, the membrane potential becomes more positive, which activates even more sodium channels, and the sodium influx overtakes the potassium efflux through the potassium leak channels, initiating a positive feedback loop. At around +40 mV the voltage-gated sodium channels begin to close and the voltage-gated potassium channels begin to open, and potassium rushes out of the cell. The potassium channels react to the repolarisation with a delay, and even after the resting potential is reached some potassium continues to flow out, causing the 'undershoot' phase, during which the intracellular fluid is more negative than the resting potential.

Once this initial action potential is triggered, it propagates down the length of the axon. As one region reaches its peak and depolarises, it nudges adjacent positive ions down the axon, sending a wave of positive charge along the axon without any individual ion moving very far, while attracting negative ions away from the adjacent membrane. As the adjacent region of the membrane depolarises, its voltage-gated sodium channels open and sodium rushes in, and so a chain reaction of action potentials is maintained all the way down the axon.
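
The spike cycle described above can be caricatured in code with a leaky integrate-and-fire neuron. The sketch below is a deliberate simplification of the conductance-based dynamics: the voltage-gated channels are replaced by a fixed threshold and reset, and all parameter values (threshold, time constant, input current) are illustrative rather than measured.

```python
# Minimal leaky integrate-and-fire sketch of the spike cycle described above.
# Simplification: real neurons use voltage-gated Na+/K+ channels (Hodgkin-Huxley
# dynamics); here a threshold and a reset stand in for them.
V_REST = -70.0       # resting potential (mV)
V_THRESHOLD = -55.0  # depolarisation level that triggers a spike (mV)
V_RESET = -80.0      # undershoot, close to the potassium equilibrium (mV)
TAU = 10.0           # membrane time constant (ms)
DT = 0.1             # integration step (ms)

def simulate(input_current, steps=1000):
    """Integrate the membrane potential and record spike times (in ms)."""
    v = V_REST
    spikes = []
    for step in range(steps):
        # The leak pulls the membrane back toward rest; the input depolarises it.
        dv = (-(v - V_REST) + input_current) / TAU
        v += dv * DT
        if v >= V_THRESHOLD:          # positive-feedback region: spike
            spikes.append(step * DT)  # record the spike time
            v = V_RESET               # hyperpolarised undershoot, then recover
    return spikes

if __name__ == "__main__":
    print(simulate(input_current=20.0)[:5])  # first few spike times (ms)
```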

Neuroplasticity

Starting from a very high-level view of biological plasticity we have neuroplasticity, which can be considered the plasticity of the brain (although its principles also apply to other areas of the body where nerves interact with muscle tissue). An intriguing aspect of neuroplasticity is that not only can certain clusters of neurons in the human brain alter their concentrations and firing rates, but with repeated learning they can migrate altogether to a different area of the brain. Studies have shown that cortical maps (certain parts of the body, like the hands, being mapped to particular parts of the brain, such as regions of the somatosensory cortex) can alter over time, as is the case when the brain repairs itself after trauma or the loss of a limb. Most of the research in this field, however, has been constrained to synaptic plasticity, as neuroscientists are interested in understanding how the brain functions at its lowest levels.

Synaptic Plasticity

Neuroplasticity describes how the human brain manages to learn over time and adapt to its environment, and it does a very good job of that. Unfortunately, it is not particularly helpful to computer scientists because it takes too high-level an approach. It does not present a model which can be directly mapped onto a computer; instead it provides a means of reasoning and expects the computer scientist to fill in the gaps. It comes as a bit of a surprise, then, that it was a psychologist by the name of Donald Hebb who first proposed the existence of synaptic plasticity, in his famous learning rule which can be paraphrased as "cells that fire together, wire together". He believed that when a presynaptic neuron repeatedly takes part in firing a postsynaptic neuron, the two are likely to strengthen their connection.

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

Hebb's hypothesis was further corroborated by Rita Levi-Montalcini and Stanley Cohen, who discovered what they coined the nerve growth factor, which strengthens the bond between two neurons, and who shared the 1986 Nobel Prize for the discovery. They postulated that the following process takes place when a presynaptic neuron A excites a postsynaptic neuron B:

  1. Neuron A spikes
  2. Neurotransmitters are released into the synaptic cleft, causing an ionic current in neuron B
  3. Neuron B releases nerve growth factor into the synaptic cleft
  4. Nerve growth factor binds onto TrkA receptors of neuron A

It is postulated that it is these nerve growth factors which increase or decrease the strength of neuronal cohesion. Another consequence of this is that a presynaptic neuron with a rapid firing rate will be more tightly coupled with its postsynaptic neuron than had it fired less rapidly. In a sense, then, the firing rate also affects the strength of the connection between neurons.
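
As a rough illustration, Hebb's rule can be written as a weight update that grows with the product of presynaptic and postsynaptic activity. The sketch below assumes rate-coded activities and an arbitrary learning rate; it does not model the nerve growth factor chemistry itself, only the resulting change in coupling strength.

```python
def hebbian_update(weight, pre_rate, post_rate, learning_rate=0.01):
    """Hebb's rule: the connection strengthens in proportion to how often
    the presynaptic and postsynaptic neurons are active together."""
    return weight + learning_rate * pre_rate * post_rate

# A presynaptic neuron with a higher firing rate couples more strongly
# to its postsynaptic partner than a slower one, as described above.
w_fast = hebbian_update(weight=0.5, pre_rate=50.0, post_rate=20.0)
w_slow = hebbian_update(weight=0.5, pre_rate=5.0, post_rate=20.0)
print(w_fast, w_slow)  # w_fast > w_slow
```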

Spike timing dependent plasticity

This is a relatively new and remarkable discovery, made by Henry Markram in 1994; it can be considered an extension of Hebb's learning rule. Hebb claimed that neurons strengthen their bonds if they are simultaneously active during excitation; however, when Hebb made that statement, the technology to measure spike timing was not as precise as it is now. Markram discovered that the optimal situation is not for two neurons to be active simultaneously, but for a very slight time window to separate them. He postulated that synapses increase their efficacy if the presynaptic neuron is activated momentarily before the postsynaptic neuron, 'momentarily' here referring to a window of 5-40 ms. This model does not refute the previous model of plasticity, in which nerve growth factors are credited as the main cause of increased synaptic efficacy; it merely takes a higher-level view of the situation, assumes the nerve growth factors are operating in the background, and presents a model for the optimization of synaptic efficacy.
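
A common way to model spike-timing-dependent plasticity is with an exponential timing window, sketched below. The amplitudes and time constant are illustrative placeholders, not Markram's measured values; the point is only that potentiation requires the presynaptic spike to precede the postsynaptic one.

```python
import math

def stdp_update(weight, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Spike-timing-dependent plasticity with an exponential window.
    If the presynaptic spike arrives shortly before the postsynaptic spike,
    the synapse is potentiated; the reverse ordering depresses it."""
    dt = t_post - t_pre  # positive when pre fires before post
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)   # potentiation
    elif dt < 0:
        weight -= a_minus * math.exp(dt / tau)   # depression
    return weight

# Pre fires 10 ms before post: inside the window, so the synapse strengthens.
print(stdp_update(weight=0.5, t_pre=0.0, t_post=10.0))
```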

Polychronization

In 2006 Izhikevich proposed a radically new approach to understanding how neurons arrange themselves. Prior to this paper, it was assumed that presynaptic neurons worked independently of other presynaptic neurons and therefore only increased their efficacy by correctly timing their spikes, in line with the spike-timing-dependent plasticity rule. Izhikevich presents a new model which proposes that neurons can work together and excite a (single) postsynaptic neuron to achieve a postsynaptic response greater than if they had all acted independently. Izhikevich's paper helps to explain the presence of precise spike-timing dynamics in the brain, even in the presence of axonal delays. He proposes a new term, polychrony, which Hebb would no doubt have described as "cells that arrive together, wire together". An important note is that neurons need not be synchronized (fire together) to arrive at the postsynaptic cell simultaneously. Due to axonal delays, certain presynaptic neurons must fire at different times in order for all of their spikes to arrive at the target simultaneously. Izhikevich puts forth the notion that neurons can form 'polychronized groups' and act collectively.

An interesting, and often overlooked, consequence of this is that, by subset construction, there can be many more polychronized groups than neurons in the brain, since each neuron can belong to more than one polychronized group. If one adds the fact that propagation delays are also factored into this equation, then we get a near-infinite number of possible groups. Unfortunately, this also makes the model much harder to simulate tractably.


What models exist for plasticity

The general model of such networks involves a hidden layer between the input and the output.

We will constrain ourselves to artificial neural networks which possess learning algorithms, and we will look at both feed forward and recurrent nets, in both supervised and unsupervised environments.

Single Layer Perceptron

The appropriate weights are applied to the inputs, which are then passed to a function that produces the output y.

An extremely basic form of feed forward network where each input is assigned a weight and fed forward to the output. The weights are updated according to the following rule:

  •  w_j ← w_j + α(d − y)x_j

where:

  •  x_j denotes the j-th item in the input vector
  •  w_j denotes the j-th item in the weight vector
  •  y denotes the output
  •  d denotes the expected output
  •  α is a constant with 0 < α < 1

Therefore, the weights are only updated when the output differs from the desired output. A consequence of this is that the perceptron incorporates supervised learning.
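
A minimal sketch of this update rule in code, assuming a step-function output thresholded at zero and an illustrative learning rate of 0.1; it trains a perceptron on the linearly separable AND function.

```python
def perceptron_step(weights, x, d, alpha=0.1):
    """One supervised update: the weights change only when the thresholded
    output y differs from the desired output d."""
    y = 1 if sum(w * xj for w, xj in zip(weights, x)) > 0 else 0
    return [w + alpha * (d - y) * xj for w, xj in zip(weights, x)]

# Train on the (linearly separable) AND function; x[0] is a constant bias input.
examples = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
weights = [0.0, 0.0, 0.0]
for _ in range(20):
    for x, d in examples:
        weights = perceptron_step(weights, x, d)
print(weights)
```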

Multi-layer perceptron

Multilayer perceptron solving the XOR function, a problem which is known not to be linearly separable.

The problem with single-layer perceptrons is that they can only solve linearly separable problems. A multi-layer perceptron generally includes a hidden layer with different thresholds for its nodes, and it is not confined to linearly separable problems. Both the single-layer and the multi-layer perceptron are simple designs to program and are computationally tractable; the running time is dominated by the dot product, which is O(n) in the length of the input vector.

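To make the XOR example concrete, the sketch below uses a tiny 2-2-1 network with hand-picked weights rather than learned ones; it only demonstrates that adding a hidden layer makes the non-linearly-separable XOR function computable.

```python
def step(x):
    """Threshold activation used by each unit."""
    return 1 if x > 0 else 0

def xor_mlp(x1, x2):
    """2-2-1 multi-layer perceptron with hand-picked weights and thresholds.
    Hidden unit h1 computes OR, h2 computes AND; the output fires for
    'OR but not AND', which is exactly XOR."""
    h1 = step(x1 + x2 - 0.5)   # OR
    h2 = step(x1 + x2 - 1.5)   # AND
    return step(h1 - h2 - 0.5) # OR and not AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_mlp(a, b))
```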

Adaline

Adaline is an extension of the perceptron model, the only difference being that in ADALINE we achieve efficiency by minimizing the least-squares error function:

  •  E = ½ (d − o)²

where:

  •  d is the desired output
  •  o is the actual output
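
A minimal sketch of the corresponding least-mean-squares (delta-rule) update, with an illustrative learning rate; unlike the perceptron, the error d − o is computed on the raw linear output before any thresholding.

```python
def adaline_step(weights, x, d, eta=0.05):
    """ADALINE / least-mean-squares update: moving the weights along the
    negative gradient of E = 0.5 * (d - o)**2."""
    o = sum(w * xj for w, xj in zip(weights, x))            # linear output
    return [w + eta * (d - o) * xj for w, xj in zip(weights, x)]

# One illustrative update: the weights move so that o gets closer to d.
print(adaline_step([0.1, -0.2], x=[1.0, 0.5], d=1.0))
```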

Radial basis function


Architecture of a radial basis function network: an input vector x is used as input to several radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs of the radial basis functions. Approximations can be performed on streams of data rather than on complete data sets.
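
As a rough illustration of this architecture, the sketch below computes the output of a small RBF network with Gaussian basis functions; the centres, widths and output weights are illustrative values rather than fitted parameters.

```python
import math

def rbf_output(x, centres, widths, weights):
    """Output of an RBF network: a weighted sum of Gaussian responses, each
    measuring how close the input is to one basis-function centre."""
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
        for c, s in zip(centres, widths)
    ]
    return sum(w * a for w, a in zip(weights, activations))

# Two basis functions in a 2-D input space (illustrative parameters).
print(rbf_output([0.2, 0.8],
                 centres=[[0, 1], [1, 0]],
                 widths=[0.5, 0.5],
                 weights=[1.0, -1.0]))
```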

Kohonen self-organizing network

These are really interesting because they are unsupervised. All the previous network examples involved examining the output and comparing it to some desired output. In a self-organizing map, neurons learn to map information from the input space to output coordinates. Interestingly, all this can be done without any knowledge of what the expected end result should be, and the input and the output do not need to have the same dimensions. The neurons produce a lower-dimensional representation of the input data while still preserving its topological structure. In a Kohonen network each input unit is connected to every neuron, which creates a lot of edges. The design is loosely based on how the visual system handles information: the Kohonen network teaches different parts of its map to respond similarly to certain inputs. The input is assigned to a node as follows:

  1. When an input arrives, its Euclidean distance to all weight vectors is computed
  2. The BMU (best matching unit) is designated: the neuron with the most similar weight vector
  3. The weights of the BMU are adjusted towards the input vector
  4. The weights of the neurons close to the BMU are also adjusted towards the input, with decreasing intensity as distance increases

Once the network has iterated over a large number of input cases (the training process), the mapping process generally becomes routine. The computational constraint of Kohonen maps is the incredibly high number of edges required between the inputs and all the nodes.
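
The four steps above can be sketched as a single training step; the Gaussian neighbourhood function, learning rate and map size below are illustrative choices.

```python
import numpy as np

def som_step(weights, grid, x, eta=0.1, sigma=1.0):
    """One Kohonen update: find the best matching unit (BMU), then pull it
    and its grid neighbours toward the input, with decreasing intensity."""
    distances = np.linalg.norm(weights - x, axis=1)        # step 1
    bmu = np.argmin(distances)                             # step 2
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)   # distance on the map
    influence = np.exp(-grid_dist ** 2 / (2 * sigma ** 2)) # Gaussian neighbourhood
    return weights + eta * influence[:, None] * (x - weights)  # steps 3-4

# A 3x3 map of neurons, each with a 2-D weight vector (illustrative sizes).
grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
weights = np.random.rand(9, 2)
weights = som_step(weights, grid, x=np.array([0.2, 0.9]))
print(weights[:3])
```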

Simple recurrent network

This is a standard feed forward net, much like a perceptron, with an additional layer alongside the input units called the context units. The hidden (middle) layer is connected to the context units, and after every iteration the context units are updated with a copy of the hidden layer's activations. Therefore, when the output is back-propagated and the learning rule is applied, the network also has access to information about the previous inputs, and it is in a better position to make predictions.
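
A sketch of one forward step of such a network (an Elman-style architecture), with illustrative layer sizes and a tanh activation; the essential point is that the hidden layer receives the context units, which hold a copy of the previous hidden state.

```python
import numpy as np

def srn_step(x, context, w_in, w_context, w_out):
    """One step of a simple recurrent network: the hidden layer sees both the
    current input and the context units (the previous hidden state)."""
    hidden = np.tanh(w_in @ x + w_context @ context)
    output = w_out @ hidden
    return output, hidden  # the new hidden state becomes the next context

# Illustrative sizes: 3 inputs, 4 hidden/context units, 2 outputs.
rng = np.random.default_rng(0)
w_in = rng.normal(size=(4, 3))
w_context = rng.normal(size=(4, 4))
w_out = rng.normal(size=(2, 4))
context = np.zeros(4)
for x in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]:
    y, context = srn_step(x, context, w_in, w_context, w_out)
    print(y)
```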

Hopfield Network

This is just a recurrent network in which all the connections are symmetric. The symmetry guarantees that an energy function decreases with every update, so the system settles into stable states and never engages in chaotic behaviour.
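
A minimal sketch of asynchronous updates in a Hopfield network with a symmetric, zero-diagonal weight matrix built from a single stored pattern; the pattern and network size are illustrative.

```python
import numpy as np

def hopfield_recall(weights, state, steps=20):
    """Asynchronous updates in a Hopfield network. Because the weight matrix
    is symmetric with a zero diagonal, each flip can only lower the network's
    energy, so the dynamics settle into a stable state instead of chaos."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

# Store one pattern with the Hebbian outer-product rule (illustrative).
pattern = np.array([1, -1, 1, -1, 1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)
print(hopfield_recall(W, np.array([1, 1, 1, -1, -1])))  # recovers the pattern or its inverse
```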

Stochastic neural networks

Just like a Hopfield network, except that the neurons update stochastically rather than deterministically.

What are the challenges of running them tractably

The greatest challenge of simulating the brain on a computer is that we are simulating it on a computer. To illustrate the point, consider this:

  • the world's fastest computer can handle 360 teraFLOPS, whereas even the most malnourished human being can manage roughly 10 petaFLOPS

In other words, the human brain can handle thirty times as much information. Another problem is that we don't fully understand how the human brain works, therefore, it is almost futile to try to model such a fast processing unit onto an inferior system. My personal opinion is that computers are poorly designed. We are trying to take machines which only see the world in one dimension, and only work in units of either one or the other (binary), and we would like to model on that a parallel processing, chemical based system. Even if we do manage to precisely create a mapping of the chemicals and their affects to a table and its appropriate functions, we're still left with the problem that we have a binary machine that's only capable of sequential processing. If the processor was capable of say a quadrillion FLOPS, then we could model 100 billion neurons easily, by assigning 10,000 FLOPS for simulation of a neuron. But that's assuming we fully understand how a neuron works. We could use those 10,000 FLOPS to model the neuron with dendrites, synapses, neurotransmitters, ion channels, lipid bilayer and axonal delay, but what good is neurons interacting with one another, if they don't produce an output? If they don't produce a desirable output? We still don't understand exactly how memory is stored. In a perfect world we'd be able to follow all of the spikes shooting off as a human being tries to solve a rubik's cube. I sometimes wonder if scientists stumbled onto digital computers, and against their will and better judgement chose to stick with it. I personally believe that the brains greatest asset is that fact that it doesn't disassociate between random memory and memory store. The brain is very versatile in that it uses the same paths for signal transmission, for memory retention. It pains me that we have models such as polychronized groups, but as of yet we can't use them because of the shortcomings of the digital computer. Perhaps we can still look forward to the possibilities of the quantum computer or the digital computer. brain vs computer