Amos Storkey

Hopfield Networks

Why Hopfield networks?

Hopfield networks are certainly nothing new in the field of neural networks. The amount of research involving Hopfield-like networks has declined substantially over the last decade. It can be argued that Hopfield networks have no natural applications. Some have argued that they serve as associative memories, or as an optimisation tool; however, they are relatively inefficient, unreliable and unprincipled at both of these tasks.

Others argue that Hopfield networks form solvable models of, and analogies to, brain activity. However, the price of the Hopfield model's simplicity is a very poor representation of anything resembling real neural behaviour. Yes, analogies provided by Hopfield models can be helpful, but they rarely progress to anything more concrete.

So why might anyone be interested in Hopfield networks? Well, the simple answer is that they provide some hard but solvable puzzles. The similarity of stochastic Hopfield systems to spin glasses in physics has meant that many of the techniques of that field have been used to characterise something of the structure of the Hopfield network. And this structure is very interesting. Hence most of the research in Hopfield networks has been motivated by an interest in the dynamical or stochastic systems which Hopfield networks represent, and in the techniques that are used to solve them.

Does this mean that research into Hopfield networks is esoteric and of no practical use? Not entirely. As is often the case with the study of particular systems, the structures and techniques used with Hopfield networks can crop up in a host of different places, from the use of chi-squared process models to learning methods in adaptations of hidden Markov models. These methods work within an appropriate Bayesian framework, and do not follow the original ad hoc form of the Hopfield associative memory. This is work in progress.

Here is an outline of earlier research on Hopfield networks.

New learning rules

A new class of learning rules for Hopfield networks has been developed, with higher capacity than the standard Hebbian rule and the same functionality. These rules also have palimpsest properties, which give them interesting learning behaviour: they forget old memories in order to learn new ones. The learning rules can be analysed in terms of iterated function systems.
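For concreteness, here is a minimal NumPy sketch of a binary Hopfield network, contrasting the standard Hebbian rule with the incremental rule of Storkey (1997), which I take to be one representative of this class. The function names and the small experiment are illustrative inventions for this page, not code from the research itself.

    import numpy as np

    def hebbian_weights(patterns):
        # Standard Hebbian (outer product) rule: W = (1/n) * sum over patterns of xi xi^T.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for xi in patterns:
            W += np.outer(xi, xi) / n
        np.fill_diagonal(W, 0.0)
        return W

    def storkey_weights(patterns):
        # Incremental rule (Storkey, 1997): each update subtracts terms involving
        # the local fields h[i,j] = sum over k != i,j of W[i,k] * xi[k].
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for xi in patterns:
            f = W @ xi                          # f[i] = sum_k W[i,k] * xi[k]
            h = f[:, None] - W * xi[None, :]    # h[i,j] = f[i] - W[i,j] * xi[j]
            W = W + (np.outer(xi, xi) - xi[:, None] * h.T - h * xi[None, :]) / n
            np.fill_diagonal(W, 0.0)
        return W

    def recall(W, x, steps=20):
        # Synchronous threshold dynamics, run for a fixed number of steps.
        x = x.copy()
        for _ in range(steps):
            x = np.where(W @ x >= 0, 1, -1)
        return x

    # Usage: store five random +/-1 patterns in a 100-unit network, then
    # recover the first pattern from a cue with roughly 10% of its bits flipped.
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(5, 100))
    W = storkey_weights(patterns)
    cue = patterns[0] * rng.choice([1, -1], size=100, p=[0.9, 0.1])
    print(np.mean(recall(W, cue) == patterns[0]))   # fraction of bits recovered

On random patterns the incremental rule typically tolerates more stored patterns before recall degrades than the Hebbian rule does, which is the capacity difference taken up in the next section.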

Capacity issues

This work examines ways of analysing the capacity of the new learning rules.
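As a rough pointer to the kind of result involved (assuming the comparison is with the standard Hebbian rule, and writing n for the number of units and m for the number of patterns storable with vanishing error), the headline absolute capacities are

    m_{\text{Hebb}} \approx \frac{n}{2 \ln n}, \qquad m_{\text{new}} \approx \frac{n}{\sqrt{2 \ln n}},

so the new rules gain roughly a factor of \sqrt{2 \ln n} over the Hebbian rule.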


© Amos Storkey 2000-2005.