James A. Bednar's Research

My research concentrates on biologically realistic computational modeling of human and animal visual systems. The research is driven by two underlying goals: (1) to understand the computational principles underlying biological vision, and (2) to validate these principles by building and testing artificial vision systems. Computational modeling addresses both goals at once: computational models allow features of biological systems to be explored in detail, and they can themselves be functional artificial vision systems.

We focus on computational models built at a scale large enough to encapsulate significant visual processing, which requires simulating at least several square millimeters of visual cortex. At the same time, the models are designed to include enough biological detail to be directly relatable to experimental results, and to capture the functionally relevant aspects of the system rather than just, e.g., its large-scale organization.

This page describes some of the major topics of research in my lab, the Computational Systems Neuroscience Group. Follow the links to find overviews, publications, and (occasionally) demos in each research area.

Reproducible Modeling

Want to know all of our secrets to making a good model of the visual cortex? You can now automatically reproduce the complete process behind our latest 2013 modeling paper in J. Neuroscience, from the very start of specifying each aspect of the model, to running thousands of jobs testing its performance, to analyzing the results, and finally to building the publication-ready figures and compiling the rest of the final paper. The flexible, practical workflow we use for gradually automating and capturing our results is described in our 2013 Frontiers paper, and can easily be adapted for doing reproducible exploratory research with other models, other simulators, and even other research areas. With no modifications, it already represents a complete, ready-to-run example that lets you pick up right where we left off, allowing you to get started doing novel research in this field right away. Everything is open source, freely available, and ready to go. Understanding the brain is going to take more work than we can ever do ourselves, so please join the cause!
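As a rough illustration only (not the actual scripts, which are linked below), the core of such a workflow is that every job is launched from a complete, explicit specification of its parameters, and its outputs are collated automatically for the later analysis and figure-building steps. In plain Python, with hypothetical script and file names, that might look like:

    import csv
    import itertools
    import subprocess

    # Hypothetical sweep: "run_gcal.py" and its flags are placeholders, not the
    # actual commands used in the published workflow.
    learning_rates = [0.05, 0.1, 0.2]
    contrasts = [10, 50, 100]

    runs = []
    for lr, contrast in itertools.product(learning_rates, contrasts):
        output = "results_lr%s_c%s.npz" % (lr, contrast)
        # Every job is launched from an explicit, complete parameter specification.
        subprocess.call(["python", "run_gcal.py",
                         "--learning-rate", str(lr),
                         "--contrast", str(contrast),
                         "--output", output])
        runs.append({"learning_rate": lr, "contrast": contrast, "output": output})

    # Collate the jobs into one table that analysis and figure scripts can read.
    with open("runs.csv", "w") as f:
        writer = csv.DictWriter(f, fieldnames=["learning_rate", "contrast", "output"])
        writer.writeheader()
        writer.writerows(runs)

In practice we use Lancet and IPython Notebook for these steps rather than ad hoc scripts, so that the specification, launching, and collation are themselves captured and reproducible.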

(2013 Frontiers in Neuroinformatics paper describing the workflow; IPython Notebook specifying the GCAL model (described below); IPython Notebook showing the steps for reproducing the results from the 2013 J. Neuroscience paper.)


GCAL


The GCAL (Gain Controlled, Adaptive, Laterally connected) model of the early visual pathway shows how a small number of biologically plausible mechanisms can give rise to a very wide range of functionally relevant phenomena that have been observed in the visual cortex. Starting from a randomly connected but topographically organized network of simple neurons, connection strengths are modified in response to incoming activity patterns, causing neurons to become selective for certain kinds of inputs. This homeostatically regulated self-organizing developmental process gives rise to biologically realistic receptive fields, tuning curves, surround modulation, aftereffects, and maps, without any requirement for genetic prespecification or top-down supervision. The model thus represents a compact, plausible hypothesis for the bulk of the observed functional properties of V1.
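The core learning step can be caricatured in a few lines: Hebbian strengthening of connections between coactive input and cortical units, followed by divisive normalization so that each neuron's total connection strength stays constant. The sketch below is illustrative only; the full GCAL definition, with contrast gain control, homeostatic adaptation, and lateral connectivity, is in the Topographica implementation linked below.

    import numpy as np

    def hebbian_step(weights, pre, post, lr=0.05):
        """One simplified learning step: Hebbian growth plus divisive normalization.

        weights : (n_post, n_pre) connection strengths
        pre     : (n_pre,)  presynaptic (input) activities
        post    : (n_post,) postsynaptic (cortical) activities
        """
        # Connections grow where presynaptic and postsynaptic units are coactive.
        weights = weights + lr * np.outer(post, pre)
        # Each neuron's total connection strength is held constant, so strengthening
        # some connections weakens the rest (competition between inputs).
        return weights / weights.sum(axis=1, keepdims=True)

    # Toy usage: with a repeatedly presented pattern, initially random weights
    # become selective for that pattern.
    rng = np.random.RandomState(0)
    weights = rng.rand(10, 100)
    weights /= weights.sum(axis=1, keepdims=True)
    pattern = rng.rand(100)
    for _ in range(50):
        response = np.maximum(weights.dot(pattern) - 0.2, 0)  # thresholded response
        weights = hebbian_step(weights, pattern, response)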

Results from GCAL and its predecessor LISSOM covering orientation, ocular dominance, motion direction, disparity, spatial frequency, and color are described in the comprehensive J. Physiology 2012 paper. Full GCAL implementations and tutorials are included in the freely available Topographica simulator. Previous results with LISSOM formed the basis of our 2005 book from Springer, which described the project and its results in complete detail, superseding all previous LISSOM publications.

(2013 J. Neuroscience paper defining the basic GCAL model, demonstrating its robustness, stability, and map quality, and justifying each of the mechanisms; 2012 J. Physiology Paris paper showing the range of phenomena that can be explained with GCAL; demo of map development in ferret vs. GCAL.)


Spatiotemporally Calibrated Model of Development

Previous developmental models have generally been quite abstract, focusing on explaining geometric map patterns or a few receptive fields, rather than accounting for the directly functionally relevant features of biological neurons. Jean-Luc Stevens, Philipp Rudiger, and I have developed a GCAL variant that is calibrated rigorously against data from the macaque monkey, both for the spatial extents of map features and connection patterns, and for the temporal pattern of responses to new stimuli. Interestingly, even though GCAL was not designed for this low-level temporal behavior, it required very little change to closely match measurements of peri-stimulus-time histograms of LGN and V1 cells. These results suggest that the lateral interaction processes modeled in GCAL also lead inevitably to transient temporal responses that highlight changing stimuli.
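For reference, the peri-stimulus-time histogram used in such comparisons is simply the firing rate as a function of time since stimulus onset, averaged over presentations. A minimal sketch of how it can be computed from recorded or simulated spike times (illustrative, not our calibration code):

    import numpy as np

    def psth(spike_times, stimulus_onsets, window=0.25, bin_size=0.01):
        """Mean firing rate (spikes/s) as a function of time since stimulus onset,
        averaged over stimulus presentations. Times are in seconds."""
        spike_times = np.asarray(spike_times)
        bins = np.arange(0.0, window + bin_size, bin_size)
        counts = np.zeros(len(bins) - 1)
        for onset in stimulus_onsets:
            relative = spike_times - onset
            relative = relative[(relative >= 0) & (relative < window)]
            counts += np.histogram(relative, bins=bins)[0]
        return counts / (len(stimulus_onsets) * bin_size)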

(Posters at CNS 2012 and SfN 2012.)


Aftereffects


The adult visual cortex is more stable than that of the developing infant, but adaptive processes are evident even in such mature systems. For instance, orientation perception is affected by recently viewed patterns, a phenomenon known as the tilt aftereffect (TAE). My work shows that this effect can result from the same self-organizing developmental processes that drive development in GCAL.

In GCAL, these effects result from adapting lateral inhibition followed by normalization of synaptic strengths. Unlike previous TAE models, GCAL provides a simple explanation for both direct effects (repulsion between small angles) and indirect effects (attraction between large angles). Also unlike other models, GCAL clearly shows the functional relevance of this behavior: it serves to remove redundancy in the stream of visual inputs over time, greatly improving the ability of the system to detect small changes in orientation. This work demonstrates that the same fundamental learning processes that drive the initial development of the cortex may also be operating in the adult over short time scales.
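The logic can be caricatured with a ring of orientation-tuned units: Hebbian adaptation of the lateral inhibitory weights, followed by normalization of each unit's total inhibition, is enough to push the decoded orientation of a subsequent test stimulus away from the adapted orientation. The sketch below is a deliberately reduced stand-in for the full model, with made-up parameter values.

    import numpy as np

    n = 90
    prefs = np.linspace(0, np.pi, n, endpoint=False)       # preferred orientations

    def feedforward(theta, k=8.0):
        # Orientation tuning curves (von Mises in doubled-angle space).
        return np.exp(k * (np.cos(2 * (prefs - theta)) - 1))

    def respond(theta, W):
        f = feedforward(theta)
        return np.maximum(f - W.dot(f), 0)                  # response after lateral inhibition

    def decode(r):
        # Population-vector estimate of the perceived orientation.
        return 0.5 * np.angle(np.sum(r * np.exp(2j * prefs))) % np.pi

    baseline = np.full((n, n), 0.6 / n)                     # weak, uniform inhibition
    W = baseline.copy()
    adaptor = np.deg2rad(90)
    for _ in range(200):                                    # prolonged exposure to the adaptor
        r = respond(adaptor, W)
        W = W + 0.0005 * np.outer(r, r)                     # Hebbian growth of inhibition
        W = W * 0.6 / W.sum(axis=1, keepdims=True)          # normalize total inhibition per unit

    test = adaptor + np.deg2rad(10)
    print(np.rad2deg(decode(respond(test, baseline))),      # unadapted: about 100 degrees
          np.rad2deg(decode(respond(test, W))))             # adapted: repelled away from 90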

Other more recent work (with Julien Ciroux) shows that similar processes can account for the McCollough effect in color vision, and (with Chris Ball) for the motion aftereffect (waterfall illusion). Modeling and psychophysical work with Roger Zhao (in collaboration with Peter Hancock at Stirling) also suggests that higher level aftereffects for face perception share many of the same mechanisms.

(2011 Vision Research paper on face aftereffects; 2000 Neural Computation paper on TAE; 2005 MSc thesis on McCollough effect; ECVP 2008 poster on face aftereffects; 2006 CNS poster on MAE; live demo of the TAE.)


Surround Modulation

Visual cortex neurons are not simply feedforward linear filters; instead they are strongly modulated by signals from neurons that respond to adjacent or more distant areas of the visual field. A bewildering array of such effects has been demonstrated, but a general theory for surround modulation is lacking. Judith Law, Jan Antolik, and I have succeeded in unifying previously disparate models for surround modulation and map development, and are investigating the idea that the variety of modulation effects reflects the variety of neuron types and interconnections that arise through development. The results suggest that neural output is continuously modulated to suppress redundancy and highlight changes relative to both the recent and the long-term history of visual experience. The model also shows how the Mexican-hat connectivity of previous developmental models can be implemented using biologically plausible mechanisms that do not require very long-range inhibition. The resulting model reproduces much of the diversity found in single-unit recordings, showing how this diversity can be related to the map patterns and the connectivity that underlies the map patterns.
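For reference, the "Mexican hat" referred to above is the classic center-surround lateral interaction profile, conventionally written as a difference of Gaussians (illustrative parameter values below); the point of this work is that the effective profile can arise without literally wiring in long-range inhibition.

    import numpy as np

    def mexican_hat(distance, exc_sigma=0.5, inh_sigma=1.5, exc_gain=1.0, inh_gain=0.6):
        """Difference-of-Gaussians lateral interaction: short-range excitation minus
        broader inhibition (illustrative values, arbitrary cortical-distance units)."""
        excitation = exc_gain * np.exp(-distance**2 / (2 * exc_sigma**2))
        inhibition = inh_gain * np.exp(-distance**2 / (2 * inh_sigma**2))
        return excitation - inhibition

    distances = np.linspace(0, 5, 51)
    profile = mexican_hat(distances)   # positive near zero, negative at intermediate range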

(CNS 2007 talk, SfN 2006, Jan's 2010 PhD thesis and 2007 poster; paper in preparation.)


Development of Maps for Complex Cells

Nearly all models of map development have focused on simple cells, such as those primarily found in the input layers of V1, whose selectivities can be summarized by a simple receptive field plot. However, the actual neurons for which functional maps are typically measured in animals are complex cells, which are largely invariant to the spatial phase (detailed position) of input patterns. Previous models have shown how complex cells can develop by grouping outputs from several simple cells, but have relied on arbitrary and biologically implausible mechanisms for doing so. We have constructed models of maps of simple and complex cells that develop matching, robust orientation maps in both populations, random spatial phase preferences in simple cells (as found experimentally), and a realistic range of simple and complex cell types. The model predicts that smooth (though weak) maps for spatial phase will be present in layer 2/3, which could potentially be measured experimentally.
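The phase invariance at issue can be illustrated by pooling rectified responses of several simple-cell-like filters that share an orientation but differ in phase. The sketch below is a hand-built caricature of that grouping, whereas the model described above learns the pooling during development.

    import numpy as np

    def gabor(theta, phase, half=12, freq=0.2, sigma=4.0):
        """Simple-cell-like receptive field: a Gabor at one orientation and phase."""
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * freq * xr + phase)

    def complex_response(image, theta):
        """Pool rectified simple-cell responses over several spatial phases, giving
        orientation selectivity that is largely independent of stimulus phase."""
        phases = (0, np.pi / 2, np.pi, 3 * np.pi / 2)
        return sum(max(float(np.sum(gabor(theta, p) * image)), 0.0) for p in phases)

    # The pooled response varies far less with the phase of a preferred-orientation
    # grating than any single simple cell's response, which falls to zero at the
    # opposite phase.
    y, x = np.mgrid[-12:13, -12:13]
    for grating_phase in np.linspace(0, np.pi, 5):
        image = np.cos(2 * np.pi * 0.2 * x + grating_phase)
        print(round(complex_response(image, theta=0.0), 2))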

(2011 paper in Frontiers in Computational Neuroscience; talk at SfN 2008; posters at FENS 2008 and Neuron Satellite Meeting 2007.)


Color Maps in V1

Neural processing of color differs dramatically between the photoreceptors, retinal ganglion cells, and color-selective cells in V1. The organization of neurons in V1 and V2 has been found to reflect perceptual color categories, suggesting that this organization could be important for determining similarities and differences in perceived color. How this organization develops is not yet known, but Judah De Paula, Chris Ball, and I have developed V1 models that show how it could arise through Hebbian learning from natural color images. The same rules that govern this development also lead to the McCollough color/orientation aftereffect in the model, suggesting that similar processes also occur over short time scales during color perception in adults. We are now looking at how these maps interact with those for ocular dominance, and how the subcortical circuitry for color can be constructed.

(Talk at CNS 2007; posters at SfN 2004 and 2009; Judah's 2007 PhD thesis; short paper from CNS 2004; longer paper in preparation with Chris Ball.)


Rodent versus Carnivore Orientation Maps

Rodents appear to have randomly organized feature preferences in V1, in stark contrast to the smooth, ordered maps typical of higher mammals such as carnivores and primates. The overall circuitry and structure of the visual cortex appear similar across areas and across species, and so it is very interesting to consider why rodent V1 should have such a different architecture. Using data from two-photon imaging obtained from mouse by our collaborator Thomas Mrsic-Flogel (University College London), Judith Law and I evaluated a number of hypotheses for how this disorder could arise and whether it is functionally significant. Interestingly, nearly all of our hypotheses were sufficient to account for the observed disorder in rodent maps, which suggests that there may be many reasons for such disorder. However, few of these mechanisms affected the functional properties of the network, suggesting that these differences are not likely to be crucial for visual perception.

(Judith's 2009 PhD thesis)


Whisker Maps in Rodent Barrel Cortex

Barrel cortex in rodents shares many similarities with primary visual cortex of higher mammals, and contains detailed representations of sensory inputs from the animal's whiskers. Maps for direction of whisker deflection have been found in these areas, and Stuart Wilson and I (in collaboration with Tony Prescott, University of Sheffield) have built a simple model that explains how these maps could arise and why their global alignment matches the pinwheel of possible directions. The model predicts that the global organization results from a correlation between whisker deflection direction and the orientation of the leading edge of the stimulus. In current work, Stuart is testing this prediction by building mechanical whiskers to collect detailed data about the patterns of whisker stimulation during encounters with objects.

(PLoS CB paper 2011; PLoS ONE paper 2010; Stuart Wilson's 2007 MSc thesis; Barrels 2007 poster; SfN 2009 poster; BBC News article; BIOTACT project)


Constructing Complex Systems by Pattern Generation

Computational models can develop realistic cortical structures when presented with approximations of the visual environment. However, the brain already has significant structure at birth, so environmental inputs cannot account for all of this self-organization. This research project explores a surprisingly simple but very effective way that an organism's genome can specify detailed cortical structures, by generating training patterns internally. The end result is that genetic information is expressed through the same robust learning mechanisms that also incorporate information from the environment. Simulations using genetic algorithms that can select between pattern generation and hardcoding show that pattern generation followed by learning can achieve better results than learning or hardcoding alone, under a wide range of conditions.
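Concretely, "generating training patterns internally" means the genome need only encode a simple stochastic pattern generator, such as the elongated Gaussian blobs often used as a stand-in for retinal waves; the resulting patterns are then fed through the same learning machinery later used for visual input. A minimal sketch with illustrative parameter values:

    import numpy as np

    def internal_pattern(size=48, n_blobs=2, sigma_x=7.0, sigma_y=2.5, rng=None):
        """One internally generated training pattern: a few elongated Gaussian blobs
        at random positions and orientations, a common stand-in for spontaneous
        (e.g. retinal-wave-like) activity. All parameter values are illustrative."""
        rng = rng or np.random.RandomState()
        y, x = np.mgrid[0:size, 0:size]
        pattern = np.zeros((size, size))
        for _ in range(n_blobs):
            cx, cy = rng.uniform(0, size, 2)
            angle = rng.uniform(0, np.pi)
            xr = (x - cx) * np.cos(angle) + (y - cy) * np.sin(angle)
            yr = -(x - cx) * np.sin(angle) + (y - cy) * np.cos(angle)
            pattern += np.exp(-(xr**2 / (2 * sigma_x**2) + yr**2 / (2 * sigma_y**2)))
        return np.clip(pattern, 0, 1)

    # Such patterns are fed through the same learning rule later used for real images.
    training_set = [internal_pattern(rng=np.random.RandomState(seed)) for seed in range(100)]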

(IEEE Evolutionary Computation 2007 paper; ICDL 2006 paper; GECCO 2005 paper (same material as IEEE paper; winner of a GECCO 2005 Best Paper award))


Pre and Postnatal Development of V1 Maps

Ferrets and cats develop orientation maps even when raised in darkness, suggesting that map development is driven by internal processes. However, the maps also show influences of postnatal visual experience, indicating that they cannot simply be hardwired. As explored in abstract cases under Pattern Generation above, the initial development may be driven by spontaneous patterns of visual system activity before eye opening. The internally driven period may serve to make subsequent learning from the environment more robust and less susceptible to environmental fluctuations. In simulations, Stefanie Jegelka and I have shown how maps can develop before birth and then smoothly incorporate environmental influences, and that both internal and external sources of activity are necessary to explain the experimental data.

(My 2002 PhD thesis; CNS 2003 paper (included in thesis); CNS 2006 paper.)


Development of Face Processing


Models of V1 can be closely grounded in experimental results from animals, but ultimately we will want to understand higher level processing, much of which can only be studied in humans. Because such studies cannot normally be invasive, very little detailed information is available so far, and so modeling can be useful for evaluating possible hypotheses that cannot be tested directly. In this project, Risto Miikkulainen and I examined the evidence for face-processing abilities at birth, and showed that the available evidence could be accounted for by a model that begins with some face-specific circuitry, but constructed from a set of internally generated patterns rather than being hardwired. This speculative work was designed to present a minimal hypothesis that could account for the data, and to suggest ways that the quality of the data could be improved to determine whether any such face-specific circuitry is necessary to explain the capabilities at birth and during early postnatal development in humans.

(My 2002 PhD thesis; contains material from Neural Computation 2003 paper and CogSci 2002; shorter summaries are in invited 2007 and 2003 book chapters.)


Scaling Maps


Similar cortical areas such as V1 can differ in size over several orders of magnitude between species. It is known that many of the properties of V1 neurons (such as connection lengths) do not scale appropriately as size increases, and thus that some of the mechanisms of cortical areas must differ between species. I have devised a set of scaling equations for GCAL and similar cortical models that specify how parameters must change with the size and neuron density of the simulated area for its behavior to be preserved (perfect scaling). These equations can be used to explore differences between actual and perfect scaling between species. They also make it practical to substitute a less detailed simulation when appropriate, to reduce computational requirements, while allowing the results to be applied directly to more realistic models. These equations form the basis for the Topographica simulator, which allows users to choose the number of neurons to use for a particular simulation at run time, without requiring any software or parameter changes.
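The flavor of these equations (simplified here, not the published forms) is that when the density of simulated neurons changes, any parameter expressed in units of neurons or connections must be rescaled so that the corresponding quantity in cortical-distance terms is unchanged. For example:

    def scale_parameters(radius, learning_rate, old_density, new_density):
        """Illustrative density scaling (simplified; see the Neuroinformatics 2004
        paper for the actual equations). The connection radius is kept constant in
        cortical-distance terms, and the total weight change per neuron is kept
        constant as the number of connections within that radius changes."""
        factor = float(new_density) / old_density
        new_radius = radius * factor              # radius measured in units of neurons
        # The number of connections grows with the area covered (factor squared),
        # so the per-connection learning rate shrinks by the same amount.
        new_learning_rate = learning_rate / factor**2
        return new_radius, new_learning_rate

    # Doubling the linear density of a sheet:
    radius, learning_rate = scale_parameters(radius=6.0, learning_rate=0.1,
                                             old_density=24, new_density=48)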

(Neuroinformatics 2004 paper, short CNS 2001 paper.)


Situated, Embodied Perception

In the long run, understanding how perceptual capabilities arise will require understanding the detailed context in which animals and people develop and operate. Current models have been able to use simple proxies for this context, such as natural images, but such proxies are valid only for low-level features such as contrast edges. Developing neurons selective for higher-level features such as objects and places will require training data that incorporates the patterns of sensory inputs experienced in early development. To create such training patterns, James Adwick and I have developed realistic virtual reality environments (based on Blender) for situating animals in natural scenes; Celia Fillion and I developed models using real-time, stereo camera input from a situated real-world agent; Bharath Chandra Talluri and I have developed training databases based on the visual experience of laboratory animals in cages; and Stuart Wilson and I are working to collect multisensory experiences of rat pups as they develop in huddles with their littermates. Together this work should allow much more realistic simulations of the actual sensory experience of animals, and of how their sensory systems develop as a result.

Topographica

Computational modeling of large-scale cortical map structures is difficult with existing tools, which primarily focus on either low-level models of individual neurons or high-level engineering-oriented neural network simulations. To allow these models to be used more widely and for more complex tasks, I lead an NIH-supported project to develop and maintain a general-purpose simulator for large, two-dimensional regions of cortex. The fundamental unit in the simulator is a set of cells called a Sheet arranged in a plane; users can define new Sheet and other component types and connect them with existing types into a complete model, using as much or as little biological detail as appropriate for a given research question. The goals are to help users quickly develop new models, compare them to each other, exchange them with other users, and validate them against experimental data.

Many of the components developed for the Topographica project are now also available for any researcher to use in other projects:

Param: user-controllable parameters for any Python program (see the sketch after this list)
ImaGen: resolution-independent streams of 0D, 1D, and 2D patterns for specifying, testing, and training models
Lancet: launching simulations covering a parameter space and collating the results
FeatureMapper: characterizing response to any type of test pattern, using tuning curves, receptive fields, or feature maps
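
As a small example of the first of these, a component built on Param declares typed, bounded, documented parameters that can then be set and validated from scripts, notebooks, or the command line. The class below is invented purely for illustration; only the Param calls themselves are real.

    import param

    class OrientationStimulus(param.Parameterized):
        """Invented example of a Parameterized model component."""
        orientation = param.Number(default=0.0, bounds=(0.0, 180.0),
                                   doc="Stimulus orientation in degrees")
        contrast = param.Number(default=1.0, bounds=(0.0, 1.0),
                                doc="Michelson contrast")
        n_trials = param.Integer(default=10, bounds=(1, None),
                                 doc="Number of presentations")

    stim = OrientationStimulus(orientation=45.0)
    stim.contrast = 0.5        # validated against the declared bounds
    # stim.contrast = 2.0      # would raise an error: outside (0.0, 1.0)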

(Complete info on the project is at topographica.org; papers include Frontiers in Neuroinformatics 2009, invited paper in Brains, Minds, & Media 2008, CNS 2003.)




This material is based upon work funded in part by grant 1R01-MH66991 from the Human Brain Project/Neuroinformatics program of the US National Institute of Mental Health, by grants IIS-9811478, IRI-9309273, IRI-940004P, and IRI-930005P of the US National Science Foundation, by grant EP/F500385/1 of the UK Engineering and Physical Sciences Research Council, and by grant BB/F529254/1 of the UK Biotechnology and Biological Sciences Research Council. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring organizations.