In this talk I will review the workings of the Sparse Distributed
Memory (SDM), an associative network model developed by Pentti
Kanerva in the 1980s (1). I will also review a number of "new and
improved" revisions of the original model (including one of my own
(2)) which overcome some of the disadvantages of the original
approach.
The SDM was initially proposed as a new method of storing and
retrieving long binary patterns. In the model, a number of nodes
(e.g. 1000) are randomly distributed throughout a high-dimensional
input space. Each input pattern maps to a specific point in this
space. It is extremely unlikely that there will be a memory node at
this location. Instead, a copy of the signal is sent to every node
located within a certain Hamming distance of that point. As additional
patterns are presented to the network, each node will store more than
one input pattern. When retrieving an input pattern from the network,
the contents of all nearby nodes are accessed. The copy of the
original input pattern is retrieved along with a noise term due to the
other stored patterns. If the stored patterns are random binary
vectors, the noise term has zero mean, and the original pattern can be
recovered by thresholding. However, the performance of the original SDM degrades
rapidly if non-random binary patterns are used, or if the length of
the input pattern changes. The revised models are able to overcome
these and other drawbacks of the original SDM.
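To make the store/retrieve mechanics above concrete, the following is a
minimal sketch in Python/NumPy of an SDM of the kind described: random
hard-location addresses, activation of all nodes within a Hamming radius,
per-bit counters that accumulate writes, and a thresholded sum on read.
The class name, parameter values (1000 nodes, 256-bit patterns, radius
115) and the counter representation are illustrative assumptions for
this sketch, not Kanerva's exact figures.

    import numpy as np

    class SDM:
        """Minimal Sparse Distributed Memory sketch (binary patterns)."""

        def __init__(self, n_nodes=1000, dim=256, radius=115, seed=0):
            rng = np.random.default_rng(seed)
            # Hard-location addresses scattered at random through {0,1}^dim.
            self.addresses = rng.integers(0, 2, size=(n_nodes, dim), dtype=np.int8)
            # Each node keeps one counter per bit; contents accumulate over writes.
            self.counters = np.zeros((n_nodes, dim), dtype=np.int32)
            self.radius = radius

        def _active(self, pattern):
            # Nodes whose address lies within the Hamming radius of the pattern.
            dists = np.count_nonzero(self.addresses != pattern, axis=1)
            return dists <= self.radius

        def write(self, pattern):
            # Add +1 for a 1-bit and -1 for a 0-bit at every activated node.
            signed = 2 * pattern.astype(np.int32) - 1
            self.counters[self._active(pattern)] += signed

        def read(self, pattern):
            # Sum the counters of all activated nodes and threshold at zero.
            total = self.counters[self._active(pattern)].sum(axis=0)
            return (total > 0).astype(np.int8)

A short usage example, again with illustrative values: store one random
pattern, then recall it from a cue with 20 bits flipped.

    rng = np.random.default_rng(1)
    sdm = SDM()
    stored = rng.integers(0, 2, size=256, dtype=np.int8)
    sdm.write(stored)
    noisy = stored.copy()
    noisy[rng.choice(256, size=20, replace=False)] ^= 1
    recalled = sdm.read(noisy)
    print(np.count_nonzero(recalled != stored))  # ideally 0 while few patterns are stored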
(1) "The Sparse Distibuted Memory:. Pentti Kanerva, MIT Press. 1988.
(2) "A New Approach to Kanerva's Sparse Distributed Memory". Tim Hely
and David Willshaw, IEEE Transactions on Neural Networks, 8:791-794,
1997.