Sunday, 1 March 2015

More about SDR and brain memory

We saw that we can represent a layer of neuron memory as a matrix like the following:

[
  [a11, a12, ... , a1n],
  [a21, a22, ... , a2n],
  .
  .
  .
  [am1, am2, ... , amn]
]

And since it is a bit difficult to show an m × n matrix on the web, we prefer to use the following one-row form:

[a11, a12, ... , a1n, a21, a22, ... , a2n, ... , am1, am2, ... , amn]
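
For instance, here is a minimal sketch in Python (with a small made-up 3 × 4 layer, just for illustration) of how an m × n layer can be flattened into that one-row form:

# A minimal sketch: a small 3 x 4 layer filled with arbitrary values.
m, n = 3, 4
layer = [[10 * i + j for j in range(n)] for i in range(m)]

# Flatten row by row: element (i, j) ends up at index i * n + j.
one_row = [layer[i][j] for i in range(m) for j in range(n)]

print(one_row)   # 12 values, listed in row order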

The human brain stores information in a sparse form, not a dense one. For example, you may have 10,000 bits of information of which only a small portion, perhaps at most 5%, are active. Each neuron represents one bit of information.
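
As a rough sketch (the 10,000 bits and 5% sparsity are just the numbers mentioned above, nothing standard), such a sparse memory could be modelled like this:

import random

# A sketch of a sparse binary memory: 10,000 bits, about 5% of them active.
size = 10_000
active_bits = random.sample(range(size), size // 20)   # 5% of the positions

memory = [0] * size
for i in active_bits:
    memory[i] = 1

print(sum(memory))   # 500 active bits out of 10,000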



The difference between ordinary dense storage and the brain's sparse storage is that in dense storage, if you change one bit of information, you probably can't figure out what the original content was at all. The new content probably has no similarity to the original one, because the whole sequence of bits in the memory, taken together, stands for a single thing.

But in the brain's sparse form, each bit of information has a meaning and represents something. For example, the following memory may show that the thing it refers to is square, dark, has a curved edge, and so on:

[0, 0, 1, ... , 0, 0, 1, 0, 0, 1, ... , 0, 0, 0, 1, 0, 0, 0]

An example of what distributed representation means in a network of neurons storing information in our brain.

As we talked about before, it turns out that memory storage in our brain has a form we call SDR. It is sparse, like the previous example, but it is also distributed. That means that for a concept or meaning there is more than one neuron available to represent it; in fact, concepts and neurons have a many-to-many relation. Look at this picture.

So for the previous example, say we have 30 active bits that define the thing we are thinking about. If for some reason we lose one of the active bits, or one of the zero bits suddenly becomes active, the new thing we are thinking about is still very much like the original thing; we can say the difference between the original and the changed object is only about 3% (1/30 or 1/31). So the above memory versus this one, where the third bit becomes 0, has a similarity of 29/30 ~ 97%:

[0, 0, 1, ... , 0, 0, 1, 0, 0, 1, ... , 0, 0, 0, 1, 0, 0, 0]
[0, 0, 0, ... , 0, 0, 1, 0, 0, 1, ... , 0, 0, 0, 1, 0, 0, 0]

or versus this one, where the second bit becomes active, so the similarity is 30/31 ~ 97%:

[0, 0, 1, ... , 0, 0, 1, 0, 0, 1, ... , 0, 0, 0, 1, 0, 0, 0]
[0, 1, 1, ... , 0, 0, 1, 0, 0, 1, ... , 0, 0, 0, 1, 0, 0, 0]
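
A small sketch of that comparison in Python (with short made-up vectors standing in for the long ones above; "similarity" here is counted as shared active bits over all distinct active bits, which reproduces the 29/30 and 30/31 figures):

# Similarity as shared active bits divided by all distinct active bits.
def similarity(a, b):
    a_on = {i for i, bit in enumerate(a) if bit}
    b_on = {i for i, bit in enumerate(b) if bit}
    return len(a_on & b_on) / len(a_on | b_on)

original  = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
lost_bit  = [0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]   # third bit dropped
extra_bit = [0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]   # second bit added

print(similarity(original, lost_bit))    # 3/4 here; 29/30 for a 30-bit code
print(similarity(original, extra_bit))   # 4/5 here; 30/31 for a 30-bit code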

In fact, this is the reason why we humans can easily recognize objects. Even if it is the first time you see a revolutionary brand-new cell phone, at a glance you will guess it must be a cell phone, because you have stored many features of a cell phone in your memory, and even with some of them changed and some new features added, you can still be pretty sure this new object is a cell phone.

But what we usually do in our software applications is give objects a dense code, like 100 for Object1, 101 for Object2, 103 for Object3, and so on. Now if just one bit of 100 gets flipped, it can turn into 102 or 101 or 68 or 108 or 228, each representing a totally unrelated object.
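
A quick sketch of that fragility (plain Python, not tied to any particular application):

# Flipping any single bit of the dense code 100 jumps to an unrelated code.
code = 100                                    # 0b01100100, say "Object1"
flipped = [code ^ (1 << bit) for bit in range(8)]

print(flipped)   # [101, 102, 96, 108, 116, 68, 36, 228]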

Hierarchical Memory
The neocortex we talked about has a hierarchy, which gives us a hierarchy of concepts. It means that at the lowest level you may just recognize some elementary information like lines and dots, then you may understand edges and curves, then small objects like eyes, ears, and noses, then names like Bob and John, and so on. Look at the following picture; it clearly describes what I tried to say:


Schematic of a hierarchical sequence of categorical representations processing a face input stimulus. 

So the sensory neurons give us just some basic information, and it is the hierarchical network of neurons we have that gives meaning to the things our senses experience every second.
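
As a very loose sketch (the levels and their contents are just the examples from this section, not a real model of the neocortex), the idea of each level being built from the one below could be written down like this:

# A loose sketch of the concept hierarchy described above: each level
# is made of features recognized by the level below it.
hierarchy = [
    ("level 1", ["lines", "dots"]),
    ("level 2", ["edges", "curves"]),          # built from lines and dots
    ("level 3", ["eyes", "ears", "nose"]),     # built from edges and curves
    ("level 4", ["Bob", "John"]),              # whole faces, with names
]

for level, concepts in hierarchy:
    print(level, "->", ", ".join(concepts))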