Although our understanding of the nature of sight is advancing, some big mysteries still need to be unravelled. Recognising everyday objects such as chairs and tables is child's play for humans. Computer vision experts, meanwhile, struggle to build machines with the same abilities.
While the function of the eye — among the body’s most complex organs — is to supply data on the visual environment to the brain, the capture of light and its conversion into electrical signals is just the first step in a long sequence of computations that results in our seeing colour or recognising faces.
The interpretation of the eye’s “data stream” occurs in the intricate circuitry of the brain. At a broad level, this circuitry is quite well understood. The brain is subdivided into distinct lobes. The occipital lobe (one of four in the primate brain) and a substantial part of the temporal lobe serve vision. These are tiled with more than 30 smaller brain areas, each of which is thought to contribute a distinct set of computations that transform information for the next stage of processing.
At a smaller scale, though, much less is known about the computations in the networks of neurons that ultimately cause our sensations.
This is partly because the problem the brain needs to solve is ill-defined. The data coming to it from the eyes are limited by the biological process that converts light into electrical signals. Humans can only, for instance, make use of light in a relatively limited part of the spectrum.
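How narrow that window is can be made concrete with a quick calculation. As a rough illustration (the wavelength limits below are approximate textbook figures, not taken from this article), visible light spans only about 380 to 750 nanometres, which is less than a factor of two in frequency:

```python
# Visible light occupies a narrow slice of the electromagnetic spectrum.
# Wavelength limits are approximate, for illustration only.

C = 3.0e8  # speed of light, in metres per second

def frequency_hz(wavelength_nm):
    """Convert a wavelength in nanometres to a frequency in hertz."""
    return C / (wavelength_nm * 1e-9)

f_red = frequency_hz(750)     # lowest visible frequency (red end)
f_violet = frequency_hz(380)  # highest visible frequency (violet end)

# The whole visible band covers less than a factor-of-two range in
# frequency: under one "octave" of a spectrum that stretches from
# radio waves to gamma rays.
print(f_violet / f_red)
```

Radio, infrared, ultraviolet, X-rays and gamma rays all lie outside this band, invisible to the eye's photoreceptors.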
Eyes-to-brain data are also ambiguous, since many different states of the world can give rise to the same sensory information. For example, the eyes can capture only two-dimensional representations of the 3D world.
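A minimal sketch makes this ambiguity concrete. Under a simple pinhole-camera model (an illustrative assumption, not a claim about the eye's actual optics), any two points lying on the same line of sight project to exactly the same image location, so the image alone cannot tell them apart:

```python
# Perspective projection: many different 3D points map to the same
# 2D image point, so depth cannot be recovered from one image alone.

def project(point_3d, focal_length=1.0):
    """Pinhole-camera projection of a 3D point onto a 2D image plane."""
    x, y, z = point_3d
    return (focal_length * x / z, focal_length * y / z)

# A small nearby object and a larger, twice-as-distant one...
near = (1.0, 2.0, 4.0)
far = (2.0, 4.0, 8.0)

# ...land on identical image coordinates.
print(project(near))  # (0.25, 0.5)
print(project(far))   # (0.25, 0.5)
```

The brain must therefore bring in extra cues, such as the slight difference between the two eyes' views, to resolve which of the many possible scenes it is actually looking at.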
Large numbers of scientists, using a great diversity of approaches, are engaged in solving the puzzle of how the brain converts the deluge of data arriving from our eyes into a sense of sight. The annual meeting of the US Society for Neuroscience, for example, attracts some 30,000 attendees.
The toolkit available to neuroscientists — to address questions from the molecular level up to the behaviour of circuits of brain cells, and even the whole organism — is rapidly expanding.
At the microscopic scale, the recent development of optogenetics has enabled scientists to control the activity of individual neurons by shining light directly on to them, in effect providing an experimental on-off switch. This allows researchers to silence specific neuron populations in order to understand their role in shaping sensation and perception.
The technique's potential to transform neuroscience has been recognised by, among other honours, the prestigious Brain Prize for outstanding neuroscience research, awarded by the Danish industrial foundation Lundbeckfonden.
These tools are useful both for understanding how healthy visual systems function and for detecting what is disrupted by trauma, stroke, neurodegenerative disease or abnormal development.
A hugely versatile set of techniques based on magnetic resonance imaging (MRI) allows scientists to measure the anatomy, the function and even localised chemical changes in brain tissue. This has enabled them to tackle questions at the macroscopic scale, that is, at scales where structures are visible to the naked eye. To track, with millisecond accuracy, the tiny electromagnetic changes outside the skull caused by the activity of neurons, scientists can now use magnetoencephalography, or MEG.
Such techniques permit a “look inside” the human brain while participants perform various cognitive tasks. By picking the right set of experimental questions, neuroscientists hope to put together the puzzle one piece at a time.