The LISSOM family of self-organizing computational models (Bednar, Choe, Miikkulainen, and Sirosh 1994-2004) aims to model in detail how the human visual cortex develops. The fundamental thesis is that the cortex organizes itself using general learning rules to capture correlations in both visual inputs and internally generated sources of activation. The learning rules consist of simple changes in the strengths of feedforward and lateral connections between neurons, so their biological implementation is straightforward. In addition, the model has been shown to exhibit many of the same features found in the cortex of humans and experimental animals.
The original inspiration for these models was the SOM algorithm (self-organizing map; Kohonen 1982) widely used for data visualization. RF-LISSOM (Sirosh and Miikkulainen 1994) extends SOM to be more powerful and more biologically realistic by using Hebbian learning and by including lateral connections between neurons. RF-LISSOM models only the primary visual cortex (V1), but CRF-LISSOM (Bednar and Miikkulainen 1999) extends it by adding a model of the processing in the retina and the lateral geniculate nucleus. This extension allows the model to work with natural image stimuli. HLISSOM (Bednar and Miikkulainen 2001) further extends CRF-LISSOM to include cortical areas beyond V1, allowing it to explain perception of objects (e.g. faces) as well as low-level image features. Thus HLISSOM is a model of the mammalian visual system as a whole, not just of V1. Nowadays, all of these models are jointly called LISSOM.
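The learning rule shared by these models can be illustrated with a short sketch. The snippet below shows a Hebbian weight update with divisive normalization, the general form used for both feedforward and lateral connections in LISSOM-style models; the function name, array shapes, and learning rate are illustrative assumptions, not the exact code from any LISSOM implementation.

```python
import numpy as np

def hebbian_update(weights, pre, post, alpha=0.01):
    """Sketch of a normalized Hebbian update, LISSOM-style (illustrative).

    weights : (n_post, n_pre) nonnegative connection strengths
    pre     : (n_pre,)  presynaptic (input) activities
    post    : (n_post,) postsynaptic (cortical) activities
    alpha   : learning rate (assumed value, for illustration)
    """
    # Hebbian term: strengthen connections between co-active neurons.
    updated = weights + alpha * np.outer(post, pre)
    # Divisive normalization keeps each neuron's total incoming weight
    # constant, so no single connection can grow without bound.
    return updated / updated.sum(axis=1, keepdims=True)
```

The normalization step is what lets simple Hebbian strengthening produce stable, selective receptive fields: connections compete for a fixed total weight, so correlations in the input are captured without runaway growth.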
To learn more about LISSOM, the best introduction is to read our recent book from Springer. If that is not practical, two good but somewhat dated starting points are a 1996 HTML book article (easy to read online) and a 1997 book chapter (monochrome and easy to print out). My PhD thesis is a more up-to-date reference that includes HLISSOM, but it has been superseded by the book. Older publications are available at the UT NN group website. If you have questions about the biological underpinnings, check out our FAQ. Finally, the LISSOM software is freely available, so everyone is welcome to run their own LISSOM simulations and to develop new models based on LISSOM. There are two versions of the software: the standalone C++ version allows you to replicate the existing publications exactly, but is difficult to extend to new simulations, while the Topographica-based version is well suited to future work but does not always have exactly the same features as the C++ version. In either case, tutorials are available as a good starting point.
The logo animation above shows how the response of an RF-LISSOM orientation map to the input pattern "RF-LISSOM" changes during self-organization. When it reaches the end, the animation plays in reverse down to iteration zero, then repeats. Inactive neurons are shown as black in the image, while active neurons are shown in white or other bright colors. For active neurons, the saturation of the color indicates how selective that neuron is for orientation: a highly selective neuron is brightly colored (according to the orientation key at the right), while an unselective neuron appears white.
At the beginning of self-organization, each neuron is essentially unselective for orientation, so the neural response is almost white. The neural response is also broad and unspecific. As the network sees more oriented training patterns, neurons become more and more strongly colored, because they are becoming more selective for orientation. By the end of self-organization at 20000 iterations, neurons are strongly selective for orientation, and nearly all those responding to the letter images are brightly colored. By studying the orientation key one can verify that the actual colors present for each stroke in the letters (e.g. the vertical and horizontal lines for the "L") correspond to the orientations of those strokes. Thus after self-organization, the model cortex is representing the orientation of each local segment of the input image as well as the overall position and shape of the input.
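The color coding described above maps naturally onto the HSV color model: hue encodes the preferred orientation, saturation encodes selectivity, and value encodes activity. The sketch below shows one plausible way to compute such a pixel color; the function and its exact palette are assumptions for illustration and may differ from the mapping used in the actual animation.

```python
import math
import colorsys

def orientation_color(theta, selectivity, activity=1.0):
    """Illustrative HSV color coding for an orientation-map pixel.

    theta       : preferred orientation in [0, pi)
    selectivity : 0 (unselective -> white) .. 1 (fully saturated color)
    activity    : 0 (inactive -> black) .. 1 (fully active)
    Returns an (r, g, b) tuple with components in [0, 1].
    """
    hue = (theta / math.pi) % 1.0  # orientation determines the hue
    sat = selectivity              # unselective neurons wash out to white
    val = activity                 # inactive neurons stay black
    return colorsys.hsv_to_rgb(hue, sat, val)
```

With this mapping, an active but unselective neuron (saturation 0) comes out white, and an inactive neuron (value 0) comes out black, matching the early and late stages of the animation described above.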
For more information about orientation perception in the model, see the spinning RF-LISSOM orientation demo.
firstname.lastname@example.org | Last update: 29 Aug 2013