Tuesday, August 30, 2005

Simon Stringer Abstract

A dynamical systems approach to modelling brain function:
Embedding biophysically accurate neural network models in 3D virtual reality environments.

The power of modern computers now makes it possible to investigate the dynamics of neural processing in the brain through computer simulations of biophysically accurate neural network models embedded and self-organising within realistic 3D virtual environments. Two areas of brain function which are immediate candidates for this approach are object recognition in the ventral visual processing stream, and the representation of space in the hippocampus and related brain regions.

In self-organising neural network models of object recognition in the ventral visual stream, the spatiotemporal statistics of how objects move in the world are critical to how the network self-organises during learning, and to whether transform-invariant representations develop. Because of this, we have begun to use visual input from 3D virtual reality environments, in which the time evolution of the positions and velocities of visual stimuli can be carefully controlled, as well as the point of view of the network or agent within the environment.
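Models of this kind commonly exploit the temporal continuity of object transforms through a rate-coded "trace" variant of Hebbian learning, in which the postsynaptic term is a short-term temporal average of activity. The abstract does not specify the rule, so the following is only an illustrative sketch of that general idea; the function name, parameter values, and normalisation scheme are assumptions, not details from the text.

```python
import numpy as np

def trace_hebbian_update(w, x, y, y_trace_prev, eta=0.01, nu=0.8):
    """One illustrative step of a rate-coded trace learning rule.

    w            : (n_post, n_pre) synaptic weight matrix
    x            : (n_pre,) presynaptic firing rates
    y            : (n_post,) current postsynaptic firing rates
    y_trace_prev : (n_post,) activity trace from the previous time step
    eta          : learning rate (illustrative value)
    nu           : trace decay parameter; nu=0 recovers a plain Hebbian rule
    """
    # Temporal average of postsynaptic activity links successive transforms
    # of the same object, since they tend to occur close together in time.
    y_trace = (1.0 - nu) * y + nu * y_trace_prev
    # Hebbian update driven by the trace rather than the instantaneous rate.
    w = w + eta * np.outer(y_trace, x)
    # Normalise each postsynaptic cell's weight vector to keep weights bounded.
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    return w, y_trace
```

Because the trace carries activity forward in time, views of an object seen in quick succession strengthen onto the same postsynaptic cells, which is one route to the transform invariance discussed above.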

Further work is needed to improve the biophysical accuracy of existing vision models, which typically rely on rate-coded Hebbian learning rules that form associations between all co-active neurons. Because of this limitation, current models are usually trained with only a single stimulus presented at a time. Incorporating more biophysically accurate learning mechanisms, such as spike-timing-dependent plasticity, may play an important role in enabling neural network models to process visual input from complex scenes containing multiple features and objects.
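Spike-timing-dependent plasticity replaces the rate-coded Hebbian term with a rule sensitive to the relative timing of individual pre- and postsynaptic spikes. A minimal sketch of the standard exponential STDP window follows; the abstract gives no parameter values, so the amplitudes and time constants below are illustrative assumptions.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair under an exponential STDP window.

    delta_t : t_post - t_pre in milliseconds.
              Positive (pre fires before post) -> potentiation.
              Negative (post fires before pre) -> depression.
    All parameter values are illustrative, not taken from the abstract.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t >= 0,
        a_plus * np.exp(-delta_t / tau_plus),    # long-term potentiation branch
        -a_minus * np.exp(delta_t / tau_minus),  # long-term depression branch
    )
```

Because the sign of the update depends on spike order, only causally related pre/post pairs are strengthened, which is one reason such rules may help a network segment scenes containing several simultaneously present objects.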

Investigations of how the brain learns to represent 3D space and perform spatial navigation are also set to benefit from the use of 3D virtual reality simulations. For example, 3D virtual reality tools should have an important role to play in simulating the development of "place cells" found in the rat hippocampus, which appear to underpin the animal's ability to navigate. A key question is how place cells develop their firing properties, using low-level sensory inputs such as vision together with idiothetic (self-motion) signals to guide the self-organisation of their synaptic connections. As above, further improvements in biophysical accuracy may be required in order to allow these models to process realistic visual input from complex environments.
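One simple hedged sketch of how location-specific responses could self-organise from sensory input is competitive (winner-take-all) learning, in which the most strongly responding cell moves its weights toward the current input vector. The real hippocampal models are considerably more elaborate (and combine visual with idiothetic signals); everything below, including parameter values, is an illustrative assumption.

```python
import numpy as np

def competitive_step(w, x, eta=0.1):
    """One winner-take-all learning step over a bank of candidate "place" cells.

    w   : (n_cells, n_in) weight matrix, rows assumed unit-normalised
    x   : (n_in,) sensory input vector for the current location
    eta : learning rate (illustrative value)
    """
    x = x / (np.linalg.norm(x) + 1e-12)        # normalise the input
    winner = int(np.argmax(w @ x))             # cell responding most strongly
    w[winner] += eta * (x - w[winner])         # move winner toward the input
    w[winner] /= np.linalg.norm(w[winner])     # keep the row unit length
    return w, winner
```

Repeated over a trajectory through the environment, each cell comes to respond to the sensory inputs typical of one region, giving a crude analogue of a place field.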

