Sunday, January 16, 2011

AI - Where to begin.... maybe

Note: if you believe I'm wrong, please tell me how: post a link to some info, etc. Criticism is welcome if it is constructive or useful. What follows are my thoughts, not what I think the world must accept.

Many people think about and work at understanding how our mammalian brains work. Some approach the problem by trying to figure out what each neuron is doing. Others attempt to find which region of the brain is active during any particular activity. Sorry, but there is no region of the brain dedicated solely to thoughts of Pamela Anderson. There are regions that activate when we think of her, but they are multipurpose and get used for other things and other people too.

NOTE: The mammalian brain is multi-functional. That is, much of it is dedicated to or programmed for this task or that, but can be put to use for other things if the situation calls for it. Paraplegics do not simply stop using the parts of their brains that previously controlled their limbs and handled nerve signals from those limbs. The extent of brain re-use is (To My Knowledge) unknown.

Remember that TMK part, I'll use it again.

We have found regions of the brain that are sort of hardwired for certain purposes. There are many websites that will show you what the human brain map looks like. When you look at one, the brain begins to look a lot like a machine - this part does that, that part does this, and so on. Why should it be divided up like this? The cells in one region are not much different from the cells in any other region, so why should this area differ in usage from that one? How did it get that way?

  • The simple answer to how is 'evolution'
  • The simple answer to why is 'accident'
Evolution is essentially an accidental, needs-based development system. A mutation occurs and may hang around for many generations before another mutation happens that makes the first one useful to survival. Once that happens, the organism has a 'need' for that mutation, a need that was generated accidentally. There is no design. The human brain is subject to evolution and this accidental, needs-based construction. That is to say, what our brains are today can fairly be described as an accident.
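If that sounds hand-wavy, here is a toy simulation of the idea. The model and numbers are entirely my own invention, just for illustration: mutation A drifts along neutrally until mutation B shows up, and only the A+B combination is favored by selection.

    import random

    # Toy model of accidental, needs-based development (my own invention,
    # not a standard algorithm): mutation A is neutral on its own and only
    # becomes 'needed' once mutation B also exists.

    POP = 200
    GENERATIONS = 500
    MUTATION_RATE = 0.005

    # Each individual is just the set of mutations it carries.
    population = [set() for _ in range(POP)]

    def fitness(individual):
        # A alone or B alone changes nothing; A+B together aids survival.
        return 1.5 if {"A", "B"} <= individual else 1.0

    for _ in range(GENERATIONS):
        # Fitter individuals leave more offspring (weighted resampling).
        weights = [fitness(ind) for ind in population]
        population = [set(random.choices(population, weights)[0])
                      for _ in range(POP)]
        # Rare, undirected mutation: no design, just accident.
        for ind in population:
            if random.random() < MUTATION_RATE:
                ind.add(random.choice(["A", "B"]))

    carriers = sum(1 for ind in population if {"A", "B"} <= ind)
    print(f"{carriers}/{POP} carry the accidentally 'needed' A+B pair")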

If we examine the brains of very small mammals, we find that they are not much different from our own, yet we know that size does make a difference. In fact, the energy used by and dedicated to brain function makes a HUGE difference. The trouble is that we don't understand all the reasons this is so. A bigger organism has a bigger brain but not necessarily more intelligence; the number of neurons and their speed are likely factors. Additionally, I believe the energy available to the brain has a big impact. No matter how large a brain is, if it does not have the energy it will not function at its best. Note that when you are tired or hungry your brain is suffering from a lack of energy - no energy = bad performance.


To be more intelligent we (it would seem) need only to feed more energy to our brains. This is partially true, in that more brain function will require more energy. The trouble is that the body is a system, not simply interconnected units. To feed the brain more energy and have it use that energy requires fine-tuning of the entire system, not simply taking a brain-energy pill. Even saying it that way undersells the systems which support our brains. Generally they are already optimized, and pushing them harder will cause a breakdown somewhere else or simply fail to work. That optimization already accounts for the fact that other parts of the supporting systems may not be optimal.

Before continuing, let's look at what intelligence is... as far as we know:


In effect we can boil it down to this. The body is a support system for the brain. Intelligence does not require a body or even five sensory inputs - see Stephen Hawking and Helen Keller for examples of why I say that. Intelligence exists within the brain.

I think of intelligence as the ability and capacity to acquire, manipulate, and consume information. I don't think you can make it simpler than that. If you make it more complex then you are trying to apply this simple definition to some task or group of tasks.
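To make that concrete, here is how I might sketch the definition in code. The class and method names are mine, invented just for this post; the point is only that intelligence is a process with three parts, not a thing.

    from abc import ABC, abstractmethod

    # A minimal sketch of intelligence as a process: acquire, manipulate,
    # consume. The names are mine, invented for this post; nothing here is
    # standard AI terminology.

    class Intelligence(ABC):
        @abstractmethod
        def acquire(self):
            """Take in information from the environment."""

        @abstractmethod
        def manipulate(self, information):
            """Transform or combine the acquired information."""

        @abstractmethod
        def consume(self, information):
            """Use the information: act on it, store it, learn from it."""

        def step(self):
            # One cycle of the process. No information, no intelligence.
            self.consume(self.manipulate(self.acquire()))

Notice that the class holds no information of its own. That matters for what comes next.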

Given this very small definition of intelligence, we can theorize other things which may or may not follow traditional thoughts on intelligence:

  • Intelligence is useless without information
  • Intelligence is not information
  • Intelligence without new and changing information is 'probably' useless
In those three sentences we have defined intelligence as a process rather than a possession or object. Now we can apply that process to other organisms and systems, free of the clutter that human task requirements imply. Intelligence does not require abstract thought, for example, though such a capability is without doubt very useful. Intelligence requires only new and changing information to function. Clearly we see humans who still function as organisms yet do not demonstrate intelligence, such as coma victims.

On that last statement, let's be clear that the mammalian brain has multiple functions. Survival and body operations are not part of the intelligence of mammals; they are independent of what we think of as human intelligence, as can be seen in comatose patients. Already we can begin to unravel what intelligence is not by looking at intelligent systems which are broken or operating outside of normal conditions, where 'normal' is arguably difficult to define. Just the same, we can draw generalized conclusions about an unknown system from limited information if we have enough observational evidence... thus giving us a 'normal' set of operating parameters.

Let's add one more word to our basic definition of intelligence: autonomy.

Mammals are examples of autonomous intelligence, but we also have examples of non-autonomous intelligence. The thermostat in your house is a very simple non-autonomous intelligent system. It acquires, manipulates, and consumes information. The fact that its existence and form are fixed and dependent on the information it uses removes any autonomy from this system. Most people will say it is not intelligent, but by the most basic definition of intelligence it is. Perhaps you feel this is setting the bar too low for the task, but I argue otherwise. Your body is made up of trillions of simple cells. Their combined impact on the world shows autonomous intelligence. Do you control the white blood cells in your body? Do they act intelligently? They acquire and consume information about bad stuff in your body. They do not attack just anything. They are not autonomous intelligent systems, simply because their capacity to process information is harshly limited in scope.
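To show how low the bar really is, here is the thermostat written out as code. This is a sketch of my own, and the sensor callback is hypothetical. It acquires, manipulates, and consumes information, yet its scope is fixed forever: temperature in, furnace switch out, nothing else.

    # The thermostat as a non-autonomous intelligent system (my sketch).
    # It acquires, manipulates, and consumes information, but can never
    # use any information except temperature.

    class Thermostat:
        def __init__(self, read_sensor, set_point=20.0):
            self.read_sensor = read_sensor  # hypothetical sensor function
            self.set_point = set_point
            self.furnace_on = False

        def step(self):
            temperature = self.read_sensor()         # acquire
            too_cold = temperature < self.set_point  # manipulate
            self.furnace_on = too_cold               # consume (act on it)

    # Usage with a fake sensor:
    stat = Thermostat(read_sensor=lambda: 17.5)
    stat.step()
    print(stat.furnace_on)  # True: below the set point, so call for heat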

We can see that autonomy in intelligent systems requires a system which is adaptable and whose purpose is not tied to the specific information it consumes. That is to say, it is not restricted to specific information. A white blood cell, as part of the immune system, will help cause organ rejection: the scope of the information it can consume is very restricted. There is no autonomy when function is bound to the information consumed.

I have defined intelligence as a process of information usage. Further, I have defined two basic types of intelligence: autonomous and non-autonomous processes. The question now is how to apply that to what we believe we know of intelligence, however we otherwise define it. Many of us will be unwilling to accept that non-autonomous intelligence qualifies as intelligence, yet I argue that it does. This simple definition of intelligence leaves us free to examine intelligence at all levels of complexity.

In a single thought we can see the scope of this task. The mammalian eye comes in many shapes and sizes, yet the information it provides the brain is used for much the same things in all mammals. We also know that this information is consumed in varying ways and to varying degrees. Predators have different vision systems than most prey. Some prey animals have independent eye movement and vision processing in order to better avoid being 'dinner'. Human intelligence can function with good or bad vision and is capable of compensating where non-autonomous intelligence would fail. This is a problem for robotics and AI in general. Simply sticking non-autonomous intelligent systems together does not make an autonomous intelligent system. There seems to be something more required.

This brings us back to the third part of my hypothesis: Intelligent systems without new and changing information are probably useless. The other side of this coin is that a system which cannot consume new and changing information outside the normal scope is non-autonomous intelligence. 

It can be generally agreed that the ultimate goal of AI is autonomous intelligence.

How to make the leap to autonomous intelligence

This is where the magic happens: information storage. If you have to burn your finger every time you see a hot stove, you need a storage upgrade or a better memory-retrieval system. This demonstrates why operating only on current sensory inputs reduces an autonomous intelligent system to a complex but non-autonomous one.

This is a serious problem for AI scientists. Not only does our system have to be intelligent and with variable scope, but it has to be able to store implied information (or meta data) about previous actions. In simplistic terms, we humans learn that red is the color of hot and often results in pain. The implication to be stored is that red=pain. Over time we learn this is not always true, but we will remain cautious around red things.

Here is the problem: how to remember that red equals pain. It sounds simple to us, but break it down against the simple definition of intelligence. We further refine the meta data so that when sensory data includes not only the color red but also the sense of heat, it usually means pain if touched. So now we have red+heat=pain. You can imagine the many iterations of storing meta data regarding red, heat, object descriptions, and pain. Those many iterations are the variable scope of our autonomous intelligence in this situation. Without that we would get burned a lot.

We apply this meta data to the shape and color of flames, and no matter how fake the flames are we still associate the danger of pain with them... forever. In essence, we store that meta data for long-term re-use. We don't store the exact color of red or shape of flame, but generalities of them. Take a moment and describe the color of a hot electric burner on your stove, or the color of a candle flame. Not easy, is it? Yet you would definitely recognize either if you saw it, along with all that its existence implies from the stored meta data about it. I want to give this association or meta data a name. Memory is not really a good word for it. It is a rule based on implied properties of specific information, in this case sensory input.
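Here is one way the red=pain and red+heat=pain rules might be stored and applied. This is a rough sketch of my own, not a real learning algorithm. Note that it stores generalities (sets of properties and counts), not exact sensory values.

    from collections import defaultdict

    # A rough sketch (my invention) of storing meta rules like red=pain
    # and red+heat=pain: count how often sets of sensory properties
    # co-occur with pain, then generalize from the counts.

    class MetaRules:
        def __init__(self):
            self.seen = defaultdict(int)     # times a property set occurred
            self.painful = defaultdict(int)  # times it came with pain

        def observe(self, properties, pain):
            # Store meta data about the combination, not exact sensory
            # values: generalities, as described above.
            key = frozenset(properties)
            self.seen[key] += 1
            if pain:
                self.painful[key] += 1

        def expect_pain(self, properties):
            # Apply every stored rule whose properties appear in the input.
            key = frozenset(properties)
            return any(rule <= key and self.painful[rule] / count > 0.5
                       for rule, count in self.seen.items())

    rules = MetaRules()
    rules.observe({"red", "hot"}, pain=True)   # touched the burner
    rules.observe({"red"}, pain=False)         # a red apple: harmless
    rules.observe({"red", "hot"}, pain=True)   # burned again
    print(rules.expect_pain({"red", "hot", "round"}))  # True: red+heat=pain
    print(rules.expect_pain({"red"}))                  # False: red alone is fine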

We can say that variable-scope autonomous intelligence must be able to create, store, and use meta rules. This implies another simple aspect of autonomous intelligence: most memory storage includes meta data from all available information. Further, through repetition this information is formulated into a rule applied to future observations of such grouped information. This might be called learning, yet at the base level it is simply information relationships and the storage of meta data about them.

Now what we need is a mechanism to apply every meta rule to every piece of new information. We're either going to need a very big computer or a lot of small computers working harmoniously in parallel. Speed of information retrieval and rule application is clearly a problem. We're going to need a pretty special mechanism to achieve suitable performance here. The more we 'learn', the more difficult this task becomes. This is not just a problem for robotics and AI; it's a real problem for humans. If you are unsure what I mean, pretend your life depends on passing the algebra exam tomorrow. You know the one, the one you took in 10th grade.
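The naive approach scans every stored rule for every new observation, which only gets worse as we learn. Here is a sketch (again, purely my own illustration) of one mitigation: index the rules by the properties they mention, so retrieval narrows to plausible candidates instead of the whole store.

    from collections import defaultdict

    # Index meta rules by the properties they mention, so each new
    # observation is checked only against candidate rules that share at
    # least one property, instead of the entire rule store.

    class RuleIndex:
        def __init__(self):
            self.by_property = defaultdict(set)  # property -> rule ids
            self.rules = {}                      # rule id -> property set

        def add(self, rule_id, properties):
            self.rules[rule_id] = frozenset(properties)
            for prop in properties:
                self.by_property[prop].add(rule_id)

        def matching(self, observation):
            obs = set(observation)
            candidates = set()
            for prop in obs:                     # gather candidates only
                candidates |= self.by_property.get(prop, set())
            return [r for r in candidates if self.rules[r] <= obs]

    index = RuleIndex()
    index.add("red=pain", {"red"})
    index.add("red+heat=pain", {"red", "hot"})
    print(sorted(index.matching({"red", "hot", "round"})))
    # ['red+heat=pain', 'red=pain'] -- both rules fire on this observation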

In the way that flying is easy, building AI is easy. In the former, simply throw yourself at the ground and miss. In the latter, just figure out how to remember all the information you acquire, build rules from it, and then apply them to every new piece of information.

AI is easy, right?
