A widely quoted (and more useful) example is the Kohonen Phonetic
Typewriter.
A 2D array of neurons, each of which has 15 inputs, is fed the Fourier
coefficients of a speech signal that is sampled every 9.83 ms.
The spoken words form a stream of phonemes around which the samples
cluster; after training in exactly the same way as the simple example
above, the neurons in the network come to represent these phonemes.
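The training loop can be sketched as follows. This is a minimal, hypothetical illustration of Kohonen-style self-organisation, not the original typewriter's implementation: the grid size, learning-rate and neighbourhood schedules are assumed values, and the 15-dimensional input vectors are synthetic stand-ins for the spectral coefficients described above.

```python
import numpy as np

# A small 2D grid of neurons, each holding a 15-element weight vector
# (matching the 15 spectral inputs in the text). All data are synthetic.
rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 15
weights = rng.random((grid_h, grid_w, dim))

# Grid coordinates of every neuron, used for the neighbourhood function.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)      # shape (h, w, 2)

def train(samples, epochs=20, lr0=0.5, sigma0=3.0):
    global weights
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # shrinking neighbourhood
        for x in samples:
            # Best-matching unit: neuron whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood centred on the BMU in grid space.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            # Pull each neuron's weights towards x, scaled by lr and g.
            weights += lr * g[..., None] * (x - weights)

samples = rng.random((200, dim))   # stand-in for spectral sample vectors
train(samples)
```

After training, neighbouring neurons hold similar weight vectors, so nearby regions of the grid come to respond to similar inputs.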
A test signal can then be played through the network; the winning
neuron spends most of its time in the neighbourhoods of the network
phonemes that make up the unknown speech.
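The classification step can be sketched as tracing the sequence of winning neurons for successive samples of the test signal. This is an assumed illustration: the weight array here is a random stand-in for a trained map, and the function name `trajectory` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = np.sort(rng.random((8, 8, 15)), axis=0)  # stand-in for a trained map

def trajectory(samples):
    """Return the grid position of the best-matching neuron per sample."""
    path = []
    for x in samples:
        d = np.linalg.norm(weights - x, axis=-1)
        path.append(np.unravel_index(np.argmin(d), d.shape))
    return path

test_signal = rng.random((10, 15))   # ten successive spectral samples
path = trajectory(test_signal)       # list of (row, col) grid positions
```

The resulting path lingers near the neurons representing each phoneme in the utterance, which is what makes the map readable as a phonetic transcription.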
The nature of natural
language means that some phoneme sets are very hard to
disambiguate, and subsidiary networks are used to analyse these.
The idea has been demonstrated using the Finnish and Japanese languages.
It must be stressed that the Kohonen network is learning these
representations in an entirely unsupervised manner.