Consciousness Studies/The Neurophysiology Of Sensation And Perception

Vision

The human eye

The eye is a remarkable optical instrument that is often poorly understood by students of consciousness. The most common misconception is that there is a single 'focus' within the eye through which all the light rays pass. The purpose of this article is to describe what is known of the optics of the eye so that such misconceptions can be avoided.

 

The eye contains several surfaces at which refraction occurs: air-cornea, cornea-aqueous humour, aqueous humour-lens and lens-vitreous humour. The crude image-forming capability of the eye can be represented quite accurately by the reduced eye model, which involves a single optical surface (air-cornea). Optometrists use more accurate models such as the Gullstrand Schematic Eye, the Le Grand Theoretical Eye and the Le Grand Simplified Eye.

The lens system at the front of the eye forms an inverted image on the retina.

The eye is about 23 mm deep from the front of the cornea to the back of the retina. The refractive index of the components of the lens system varies from about 1.33 to 1.39.
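
As a rough illustration of the reduced eye model mentioned above, the size of the retinal image can be estimated by similar triangles through a single nodal point. The figure of about 17 mm from the nodal point to the retina is a conventional value for the reduced eye and is assumed here rather than taken from the text; the sketch below is illustrative only.

    # Rough sketch of image size in the reduced eye model (assumed values).
    # The nodal point is taken to lie about 17 mm in front of the retina,
    # a conventional figure for the reduced eye.
    NODAL_TO_RETINA_MM = 17.0

    def retinal_image_height_mm(object_height_m, object_distance_m):
        # Similar triangles through the nodal point:
        # image height / 17 mm = object height / object distance.
        # Metres cancel, so the result is in millimetres.
        return object_height_m * NODAL_TO_RETINA_MM / object_distance_m

    # Example: a 1.8 m tall person viewed from 10 m forms an image about 3 mm high.
    print(round(retinal_image_height_mm(1.8, 10.0), 2))  # ~3.06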

Light from every point of the field of view falls across the whole surface of the eye. There is no 'point eye', and there is no ordered image anywhere between the objects in view and the retina; the ordered image occurs only on the retina itself. The image on the retina has the form of an inverted mapping of 3D objects onto a 2D surface. This is also the form of conscious experience, so the images on the retinas are the closest physical analogues of phenomenal, visual, conscious experience (see Perspective below).

Perspective

Perspective describes how light from three dimensional objects is mapped onto a two dimensional surface as a result of the action of lenses of the type found in the eye.

 

Perspective is used by artists to create the impression of viewing a 3D scene. To do this they create a 2D image that is similar to the image on the retina that would be created by the 3D scene.
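
The mapping that artists exploit can be sketched as a simple pinhole-style perspective projection: each 3D point is scaled by its depth onto a 2D image plane. The projection distance used below is an arbitrary illustrative value, and the inversion of the real retinal image is ignored for simplicity.

    # Minimal perspective projection of 3D points onto a 2D plane (illustrative).
    # The real retinal image is inverted; that detail is omitted here.
    def project(x, y, z, d=1.0):
        # A point twice as far away projects to half the offset on the image plane.
        return (d * x / z, d * y / z)

    # Two posts of equal height at different distances: the farther one projects smaller.
    print(project(0.0, 1.0, 2.0))   # (0.0, 0.5)
    print(project(0.0, 1.0, 10.0))  # (0.0, 0.1)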

 

Naive Realists and many Direct Realists believe that the 2D perspective view is the way things are actually arranged in the world. Of course, things in the world differ from images because they are arranged in three dimensions.

Colour

The colour of an object can be represented by its spectral power distribution, which is a plot of the power available at each wavelength. The unit of light power is the watt, but the unit used to measure perceived (luminous) intensity is the candela. One candela is the luminous intensity of light at a wavelength of 555 nanometres with a radiant intensity of 1/683 watts per steradian in the direction being measured. A steradian is the solid angle at the centre of a sphere of one metre radius that is subtended by one square metre of its surface. The curious number 1/683 occurs because the unit was originally based on the light emitted from a square centimetre of molten platinum. The wavelength of 555 nm is chosen because this is the wavelength of peak sensitivity for light-adapted (photopic) vision averaged over a large group of subjects. Light-adapted vision is largely due to photosensitive cells in the retina called cones. The candela is fixed as a standard SI unit for light at a wavelength of 555 nanometres.

The lumen is the corresponding measure of the flux of light energy passing through a solid angle (a steradian): one watt of radiant power at 555 nm corresponds to 683 lumens. At a wavelength of about 520 nm only about 500 lumens of luminous flux occur per watt because the visual system is less sensitive at this wavelength. The curve of the sensitivity of the visual system to light is known as the V-lambda curve. At a wavelength of about 510 nm the same radiant intensity is seen as roughly half as bright as at 555 nm.
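
The relationship described above can be written as luminous flux (lumens) = 683 × V(λ) × radiant flux (watts). The sketch below assumes a few approximate values read from the CIE photopic V(λ) curve; they are illustrative rather than authoritative.

    # Convert radiant flux to luminous flux using approximate photopic V(lambda) values.
    # The V_LAMBDA entries are rough readings from the CIE photopic curve (assumed).
    V_LAMBDA = {510: 0.50, 520: 0.71, 555: 1.00, 600: 0.63, 650: 0.11}

    def lumens(wavelength_nm, watts):
        return 683.0 * V_LAMBDA[wavelength_nm] * watts

    print(lumens(555, 1.0))  # 683 lm: the defining wavelength
    print(lumens(520, 1.0))  # ~485 lm: roughly the '500 lumens per watt' figure above
    print(lumens(510, 1.0))  # ~342 lm: about half as bright as 555 nm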

 

Dark adapted (scotopic) vision has a peak sensitivity at a wavelength of 507 nm and is largely due to photosensitive cells called rods in the retina. Spectral Luminous Efficacy Curves are also used to express how the sensitivity to light varies with wavelength.

Phenomenal colours are due to mixtures of spectral colours of varying intensities. A spectral colour corresponds to a single wavelength of light in the visible part of the electromagnetic spectrum. Colours have three attributes: brightness, saturation and hue. The brightness of a colour depends on the illuminance and on the reflectivity of the surface. The saturation depends on the amount of white present; for instance, white and red make pink. The hue is similar to spectral colour but can also consist of combinations: for instance, magenta is a hue that combines two spectral colours, red and blue. It should be noted that experiences that contain colour depend on the properties of the visual system as much as on the wavelengths of light being reflected.

Any set of three colours that can be added together to give white is known as a set of primary colours. Many different combinations of colours can be mixed to make white, or almost any other colour. This means that a set of surfaces that all appear white could be reflecting very different distributions of wavelengths.

There are numerous systems for predicting how colours will combine to make other colours; the CIE Chromaticity Diagram, the Munsell Colour System and the Ostwald Colour System have all been used. The 1931 CIE Chromaticity Diagram is shown below:

[Image: the 1931 CIE Chromaticity Diagram]

See Chromaticity diagram for more information.
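
As a sketch of how the CIE 1931 system handles mixing: lights add linearly in XYZ tristimulus space, and the chromaticity coordinates plotted on the diagram are the normalised x and y values. The tristimulus numbers below are illustrative choices that happen to sum to a near-white chromaticity.

    # Chromaticity coordinates and additive mixing in the CIE 1931 XYZ system.
    def chromaticity(X, Y, Z):
        total = X + Y + Z
        return (X / total, Y / total)

    def mix(a, b):
        # Additive mixture of two lights given as (X, Y, Z) tristimulus values.
        return tuple(p + q for p, q in zip(a, b))

    red   = (41.2, 21.3, 1.9)   # illustrative tristimulus values
    green = (35.8, 71.5, 11.9)
    blue  = (18.0, 7.2, 95.0)

    white = mix(mix(red, green), blue)
    print(chromaticity(*white))  # ~(0.31, 0.33), close to the white point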

The retina

The retina contains photoreceptive cells called rods and cones and several types of neurons. The rods respond to light across the visible spectrum without distinguishing wavelength, and there are three varieties of cones sensitive to long, medium and short wavelengths of light (L, M and S type cones). Some of the ganglion cells in the retina (about 2%) are also slightly light sensitive and provide input for the control of circadian rhythms. A schematic diagram of the retina is shown below.

[Image: schematic diagram of the retina]

The photoreceptors hyperpolarise (their membrane potential becomes more negative) in response to illumination. Bipolar cells make direct contact with the photoreceptors and come in two types, on and off. The on-bipolar cells are also known as invaginating bipolars and the off-bipolars as flat bipolars. On-bipolars depolarise when light falls on the photoreceptors and off-bipolars hyperpolarise. Action potentials do not occur in the bipolar or photoreceptor cells.

The retinal neurons perform considerable preprocessing before information is submitted to the brain. The network of horizontal, bipolar and ganglion cells acts to produce an output of action potentials that is sensitive to boundaries between areas of differing illumination (edge detection) and to motion.

Kuffler in 1953 discovered that many retinal ganglion cells are responsive to differences in illumination on the retina. This centre-surround processing is shown in the illustration below.

[Image: centre-surround responses of retinal ganglion cells]

The centre-surround effect is due to lateral inhibition by horizontally arranged cells in the retina.
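
A toy model of this lateral inhibition is a difference-of-Gaussians filter: a narrow excitatory centre minus a broader inhibitory surround. Applied to a one-dimensional luminance step, it responds weakly to uniform areas and strongly at the boundary, producing the kind of edge enhancement (and Mach-band-like shading) discussed in this section. The kernel widths are arbitrary illustrative choices.

    # Centre-surround (difference-of-Gaussians) filtering of a 1D luminance step.
    import numpy as np

    def dog_kernel(size=21, sigma_centre=1.0, sigma_surround=3.0):
        x = np.arange(size) - size // 2
        centre = np.exp(-x**2 / (2 * sigma_centre**2))
        surround = np.exp(-x**2 / (2 * sigma_surround**2))
        # Each Gaussian is normalised so the kernel sums to zero:
        # uniform illumination produces no net response.
        return centre / centre.sum() - surround / surround.sum()

    luminance = np.concatenate([np.full(40, 0.2), np.full(40, 0.8)])  # dark | bright
    response = np.convolve(luminance, dog_kernel(), mode='same')

    # Near zero in the uniform regions, with a dip and a peak either side of the edge.
    print(response.round(3)[35:45])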

The structure of the receptive fields of ganglion cells is important in everyday processing and increases the definition of boundaries in the visual field. Sometimes it gives rise to effects that are not directly related to the physical content of the visual field. The most famous of these is the Hermann Grid Illusion: a set of black squares separated by white lines in which grey dots appear to lie where the white lines cross.

[Image: the Hermann Grid Illusion]

The grey dots are due to the relative suppression of on-centre ganglion cells where the white lines cross. This is explained in the illustration below.

[Image: explanation of the Hermann Grid Illusion in terms of on-centre ganglion cells]

Notice how the grey dots disappear when the crossings of the white lines are at the centre of the visual field. This is because ganglion cell receptive fields are much smaller in the fovea.

There are many other retinal illusions. White's illusion is particularly strong and was believed to be due to centre-surround activity but is now thought to have a complex origin.

[Image: White's illusion]

The grey lines really are the same shade of grey in the illustration. Mach bands are another example of a centre-surround effect. Centre-surround effects can also occur with colour fields, red/green and yellow/blue contrasts having a similar effect to light/dark contrasts.

Lateral inhibition and the resulting centre-surround effect increase the number of cells that respond to boundaries and edges in the visual field. If they did not occur, small boundaries falling on areas of the retina outside the fovea might be missed entirely. The result of this processing is everywhere in our normal visual phenomenal experience: not only is visual experience a mapping of 3D onto a 2D surface, it also contains shading and brightening at edges that will not be found by photometers that measure objective light intensities.

Photoreceptors become less responsive after continuous exposure to bright light. This gives rise to afterimages, which usually appear in the opponent colour (white light gives a dark afterimage, yellow light gives a blue afterimage, red gives a green afterimage, and so on). Afterimages seen with the eyes open are generally due to a lack of response to a particular range of wavelengths within the white light that bathes the retina.

It is clear that visual phenomenal experience is related more directly to the layout and type of activity in the retinal cells than to things in the visual field beyond the eye.

Visual pathways

 

The lateral geniculate nucleus

Retinal ganglion cells project to the Lateral Geniculate Nuclei (LGN), which are small bumps on the back of the thalamus. Only 10-15% of the input to the LGN comes from the retina; most (about 80%) comes from the visual cortex. The neurons in the LGN are arranged retinotopically and so preserve the layout of events on the surface of the retina.

The LGN are arranged in six layers. The two ventral (lower) layers, 1 and 2, are known as the magnocellular layers (about 100,000 neurons with large cell bodies) and the four dorsal (upper) layers, 3 to 6, are called the parvocellular layers (about 1,000,000 neurons with small cell bodies). Between the main layers are the koniocellular layers, which consist of large numbers of tiny neurons.

The left Lateral Geniculate Nucleus receives input from the right visual field and the right LGN receives input from the left visual field. Each nucleus receives input from both eyes, but this input is segregated: layers 1, 4 and 6 receive input from the eye on the opposite side and layers 2, 3 and 5 from the eye on the same side.

The magnocellular layers contain neurons that have large receptive fields, are sensitive to contrast, have a transient response and are not colour sensitive. The parvocellular layers contain neurons that have small receptive fields, are colour sensitive, have a prolonged response and are less sensitive to contrast.

The LGN pathway from the retina is largely connected to the striate part of the visual cortex (cortical area V1) via a set of fibres called the optic radiation. There are reciprocal connections between the Thalamic Reticular Nucleus and the LGN. The LGN are also interconnected with the Superior Colliculus and brainstem.

The LGN may be involved in controlling which areas of the visual field are subjected to attention (O'Connor et al. 2002).

The visual cortex

The input from the LGN goes mainly to area V1 of the cortex. The cortex is arranged in six layers and divided into columns. Each column in the visual cortex corresponds to a particular area of the retina in one eye. The columns are arranged in rows called hypercolumns, and each column within a hypercolumn responds to a different orientation of a stimulus at a given location (so it responds to edges and boundaries that are oriented in the visual field). Hypercolumns from each eye are arranged alternately and form a small block of cortex called a pinwheel. At the centre of each pinwheel are colour-sensitive cells that are usually not orientation sensitive; these coincide with the "blobs" that are seen when visual cortex is viewed using cytochrome oxidase dependent stains. It is important to note that the hypercolumns merge into one another and respond to line stimuli that cover an area of retina, so they may be physiological rather than anatomical entities.
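
Orientation-selective responses of the kind attributed to these columns are commonly modelled with Gabor filters (an oriented grating under a Gaussian envelope). This is a standard modelling convention rather than anything stated above, and the parameters below are illustrative.

    # Gabor filters as a toy model of orientation-selective cortical cells.
    import numpy as np

    def gabor(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_rot = x * np.cos(theta) + y * np.sin(theta)  # rotate the grating axis
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * x_rot / wavelength)

    # A vertically striped grating drives the vertically tuned filter (theta = 0)
    # far more strongly than the horizontally tuned one (theta = 90 degrees).
    cols = np.arange(15) - 7
    vertical_grating = np.cos(2 * np.pi * cols / 6.0)[None, :].repeat(15, axis=0)
    for theta in (0.0, np.pi / 2):
        print(round(float(abs((gabor(theta=theta) * vertical_grating).sum())), 2))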

The blind spot in each eye is represented by an area of visual cortex that only receives monocular input from the other eye (Tong & Engel 2001). The effect of the blind spot is illustrated below:

[Image: demonstration of the blind spot]

Normally it seems that the blind spot is 'filled in' with background when one eye is used. However, Lou & Chen (2003) demonstrated that subjects could respond to quite complex figures in the blind spot, although how far they were investigating 'blindsight' rather than visual experience in the blind spot is difficult to determine.

Different layers in the visual cortex send outputs to different locations. Layer 6 sends nerve fibres back to the Lateral Geniculate Nuclei and other parts of the thalamus, layer 5 projects to the superior colliculus and pons, and layers 2 and 3 project to other cortical areas.

There are two important outputs to other cortical areas, the ventral stream and the dorsal stream. The ventral stream processes colour, form and objects and proceeds to the inferior (lower) temporal cortex. The dorsal stream processes motion, position and spatial relationships and proceeds towards the parietal cortex. Lesions in the ventral stream can result in patients knowing where an object is located but being unable to enumerate its properties; lesions to the dorsal stream, on the other hand, can result in patients being able to label an object but being unable to tell exactly where it is located.

There is also a large output from the visual cortex back to the thalamus; this output contains more fibres than the thalamo-cortical input.

Depth perception

The world is three dimensional but the image on the back of the retinas is two dimensional. How does the brain give the subject a perception of depth?

Depth perception relies on cues which are data about the displacement of things relative to the body. These cues consist of:

  • the convergence of the eyes
  • the accommodation of the lens
  • binocular disparity - the difference between the images on the two retinas - first suggested by Wheatstone.
  • motion parallax - distant objects appear to move more slowly when the observer moves - first suggested by Helmholtz.
  • optical flow - the rate of expansion/contraction of a scene with movement towards or away from it (Lee & Aronson 1974).
  • binocular occlusion - parts of a scene are invisible to each eye.
  • body motion provides cues about near objects.
  • vanishing points - the convergence of parallel lines.
  • numerous other cues such as size constancy, texture etc.

Binocular disparity has been the most extensively studied source of depth cues. When the eyes converge to focus on an object in front of them there is very little disparity between the images of that object on the two retinas. The angle at the object formed between the lines that project back to the pupils is known as the vergence at the object. The surface on which all objects have the same vergence is known as the horopter.
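
The geometry behind these quantities can be sketched directly: the vergence at an object viewed straight ahead is the angle subtended at the object by the two pupils, and the disparity associated with a second object is the difference between the two vergence angles. The interpupillary distance of 63 mm below is a typical adult figure assumed for illustration.

    # Vergence angles and relative disparity for objects straight ahead (illustrative).
    import math

    IPD_M = 0.063  # assumed interpupillary distance in metres

    def vergence_deg(distance_m):
        # Angle at the object between the lines running back to the two pupils.
        return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

    fixated, other = 1.0, 1.2  # fixation at 1 m, a second object at 1.2 m
    disparity = vergence_deg(fixated) - vergence_deg(other)
    print(round(vergence_deg(fixated), 2))  # ~3.61 degrees of vergence at 1 m
    print(round(disparity, 2))              # ~0.6 degrees of relative disparity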

 

When the disparity between the retinal images is small, a single image occurs in phenomenal experience accompanied by a sensation of objects with depth. This is known as stereopsis. If the disparity between the retinal images is large, double vision ensues; this is known as diplopia. The curious feature of stereopsis is that we can see no more of the object than is visible on the retinas and certainly cannot see behind the object. Stereopsis is more like a stretching of 2D space than actual 3D.

The empirical horopter is the zone where things are seen without diplopia. The empirical and Vieth-Müller (geometric) horopters differ; this difference is the result of both processing by the CNS and optical factors.

Physiological diplopia refers to the stimulation of receptors in different parts of the retinas of the two eyes by the same object. Physiological diplopia does not always give rise to subjective diplopia: objects close to the empirical horopter do not give rise to double vision, and the zone in which this occurs is known as Panum's Fusion Area. It is widest for objects that lie away from the nose (in 'temporal' locations) and for objects that are slow moving and poorly focussed.

In their review, Cutting and Vishton (1995) discuss the contribution of each type of cue. They also present evidence that there are several zones of depth perception informed by different sets of cues: personal space, the zone of things within arm's reach; action space, the zone in which we interact and where our motions have a large impact on the perceived layout; and vista space, the zone beyond about 30 m that is informed by long-range cues.

The interesting feature of 3D perceptual space is that it is not seen. The sides of a solid object appear as intrusions or lateral extensions in 2D space: when we close an eye that has access to the side of the object and then open it again, the side grows out into 2D space. The lack of 'seeing' depth is also evident when we close one eye while looking at a vista - nothing seems to change even though stereopsis has gone. This leaves the problem of what constitutes the 'feeling' of depth. We have feelings that we can fall into space, or move into it or around in it. Depth seems to be defined by premotor modelling and the potential for occupancy by our bodies and limbs. As such it involves qualia that are different from those of vision and more akin to those that accompany movement. For example, if you reach out to touch something, move the hand back, and then consider the distance to the object, it is evident that a feeling of the movement is still present. Is depth a quale of movement modelled during the extended present of perception?

  • Cutting, J.E. & Vishton, P.M. (1995) Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (eds.) Handbook of perception and cognition, Vol 5; Perception of space and motion. (pp. 69-117). San Diego, CA: Academic Press. http://pmvish.people.wm.edu/cutting&vishton1995.pdf
