Wednesday, February 23, 2011

More Than Synapses Snapping Between Your Ears!

In a previous post I 'rambled' about wetware Finite State Machines. This relates to reports that synaptic efficiency, or perhaps even function, is affected not simply by 'firing' but by the speed and strength of the firing, as well as by external magnetic interference and other factors. In fact, the brain itself generates magnetic/electric fields which affect neurons farther away than those connected directly. This means that each neuron has more than a single input, and it's likely that several of those inputs have levels other than on/off associated with them.

Sounds interesting enough, but there is not much published work that I can find. That said, it does not mean there is no apropos information. I found a link today to a paper describing a study of the human brain while active cellular phones are pressed against the head. That is an active source of electric fields introduced to the brain. There are many folks involved, but their conclusion goes like so:

Conclusions In healthy participants and compared with no exposure, 50-minute cell phone exposure was associated with increased brain glucose metabolism in the region closest to the antenna. This finding is of unknown clinical significance.
In this post I'm suggesting a possible significance for this finding. What if it indicates that holding your phone to your left ear while driving makes your right foot, leg, and arm/hand response times slower?

Whether this indicates overstimulation, higher random noise, or productive or non-productive activity is unknown. That it shows activity due to electromagnetic radiation is what matters. This supports, however incidentally, the theories proposed by Michael Persinger and his 'god helmet'. Any evidence that the spiritual is an artifact of the human brain is further evidence that the human brain, while complex, is fully responsible for our perceived realities, whether we perceive physical or metaphysical things. The lack of 'any idea what a proper control should be' in such studies may explain the uncertainty of Persinger's claims. That is to say, since we do not know what electromagnetic radiation does to neuronal activity, controls would be difficult to design and may in fact vary from subject to subject.


The reason this is important to me is that to truly arrive at AI, it must be possible to recreate some or all of the innate, subconscious mechanisms which give rise to human 'intelligence' via man-made machinery. If there is a spiritual component, we are not going to see AI like Data in Star Trek. That would be a true shame.

Sure, all that I've done here is string together some thoughts. There is no real math involved, nor any particularly good references. That's the joy of having a blog to jot down my thoughts: I can come back later and fill in some details. I do think these are solid indicators that the brain functions on more than simple synapse firings. The findings that some animals have a sense of magnetic fields show that what we know about brains is actually very little. It seems plausible that humans can in some way perceive electromagnetic fields.

UPDATE: We have another winner. It would appear that loggerhead turtles can circumnavigate the Earth's oceans using a wetware positioning system based on the Earth's magnetic field. The story in Discovery online tells how the turtles 'see' the world a bit differently than we humans do. There are quite a few discoveries showing this type of sense. Perhaps it's MUCH more common than we think it is.

If we imagine that you have a form of synesthesia where north feels warm and south feels chilly, or east feels 'bright' and west feels 'dark', it would enable you to always determine what direction you were heading. In fact, you only need a synesthesia feeling for one direction to be able to know which way you're facing all the time.
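Just to make that single-cue idea concrete, here's a minimal Python sketch. The 'warmth' cue and its numbers are entirely hypothetical; the point is that a two-axis field reading gives you a compass heading outright, while a cue for only one direction still lets you find north by turning until the feeling peaks.

    import math

    def heading_from_field(b_north, b_east):
        # Compass-style heading in degrees: 0 = north, 90 = east,
        # computed from the horizontal components of the field.
        return math.degrees(math.atan2(b_east, b_north)) % 360.0

    def find_north_by_feel(warmth_at):
        # Turn in one-degree steps and keep the bearing where the
        # 'north feels warm' cue peaks -- the single-cue strategy above.
        best_bearing, best_warmth = 0.0, float("-inf")
        for step in range(360):
            bearing = float(step)
            w = warmth_at(bearing)  # hypothetical synesthetic cue
            if w > best_warmth:
                best_bearing, best_warmth = bearing, w
        return best_bearing

    # Toy usage: the 'warmth' cue is strongest when facing magnetic north.
    warmth = lambda bearing: math.cos(math.radians(bearing))
    print(heading_from_field(0.8, 0.6))   # ~36.9 degrees east of north
    print(find_north_by_feel(warmth))     # 0.0, i.e. facing north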

I hope and wait for a fundamentally more solid understanding of the mechanisms of the human/mammalian brain in the very near future.

Tuesday, February 22, 2011

The Clues Are Piling Up

Perhaps I'm a bit early in getting excited, but the early conclusions from some research reported by BBC Earth News, conducted by Professor John David Smith of the State University of New York at Buffalo and Michael Beran of Georgia State University, make me a bit excited.

Their study uses macaque monkeys trained to press buttons according to pictures presented to them. The research finds that the Old World macaques seem to know when they don't know the answer. It isn't simply a best-guess effort to get another piece of food. They actually behave like humans: they hesitate, and will choose the 'I don't know' button if there is one.

Dr Smith explained: "There is a big theoretical question at stake here: Did [this type of cognition] develop only once in one line of the primates - emerging only in the line of Old World primates leading to apes and humans?"
I know that I'm sometimes quick to get excited about discoveries, but if this is correct, it means that humans inherited many of their cognitive abilities, if not all of them, from an earlier ancestor. That is to say, it's not something special just to humans. This is important because we want to know about the origin of our intelligence in the effort to create AI.

There is another supremely interesting implication. If it can be shown that we inherited much of our mental faculties, then early humans would likely have been as intelligent as we are now; in the nascent human civilization they simply lacked knowledge. We ARE learning machines, singly and collectively. This might go a long way toward explaining the marvels older than written history which are often explained as requiring extraterrestrial life or intelligence to design and implement. I did say 'might', since much more information needs to be collected about such ancient marvels. I firmly believe the thought that early humans were incapable of solving problems that modern humans only know how to solve with machines is simply naive, or at worst egocentric.

UPDATE: There is a story over at NPR about a burial site in Jordan, roughly 16,000 years old, where a fox was buried in much the same way we find dogs buried with their owners much later. It's not reasonable to draw concrete conclusions about whether foxes were 'pets' 16,000 years ago, long before domesticated dogs. There IS something that can be said of this finding which is appropriate here, though. Despite the lack of modern knowledge, human behaviors then were much like ours would be today if we had to live in that environment without the knowledge of the industrial revolution; I suspect we would end up being much the same as what we think they were. This matters because we need to judge what evolution has done for intelligence. It can be agreed that early humans were intelligent beings. They lacked modern knowledge yet still created wonders of architecture, domesticated animals, began raising crops to survive winter, and did many other things that we modern humans would do if we found ourselves in such a situation. It's possible to argue this supports ancient astronauts, but in my opinion it argues more strongly that we are learning machines and possessed the basics we have now from the beginning, or near to it. The more we learn, the more we refine our behaviors and technology. That we still possess many of those behaviors speaks to the fact that many of them are biologically based.

The evolution of the body of human knowledge is special because we so easily share, copy, and improve it. If the ancient Egyptians had cement, imagine what other processes were known and later forgotten. Perhaps worse, how many were deemed acts or gifts of gods and thrown away in favor of newer gods?

This discovery holds big promises.

And just a bit more to add to the notion that our brains are not mystical, but wetware machines. Over at Neurology.com is a report indicating that an active brain is better at protecting itself from aging ailments such as Alzheimer's. The report covers research on people who have been multilingual for a large portion of their lives. Being multilingual requires extra effort in several parts of the brain, leading to the idea that activity increases protection against the barnacles of aging. A rough analogy would be to compare this extra activity in the brain to doing regular maintenance on your car or building.

There have been other studies which show that activity protects the brain from aging problems. Clearly, to me, this indicates the machine qualities of the brain vs magic. AI is possible unless there is magic involved.

If you need more evidence that your brain is more machine than magic, spend a few minutes with this video about the differences between humans and chimpanzees.

Saturday, February 12, 2011

RoboEarth - coloring inside the lines

Full disclosure: I'm not a scientist by profession, nor in the robotics business. I do however have an interest in Artificial Intelligence.

With that said, I saw the news about RoboEarth this week. They have a lot of PDF files to read if you're interested. They seem to be mostly top-down discussions of how robots in strictly confined domains can cooperate. Think of robots working together in the same building. The word 'earth' in the name seems a bit misleading. It looks more like a shared controller than anything resembling artificial intelligence, or IMO even a way to make sophisticated robots cheaper.

It may well work in well-defined domains, but I don't think it will scale at all when different domains or radically different robots are mixed together. It all sounds good at the 35,000-foot level, but getting robots to talk the same language is a bit more difficult than it would at first appear.

If we simply look at how humans communicate it is easy to demonstrate why a simplistic view of communications is silly... at best. Say you are driving a car and one of the passengers says 'hey, what is that thing over there?'

Unless you are in a desolate place where there is only ONE thing to look at aside from desolate flat earth, you will have to interpret what in the world the other person is talking about, if that is even possible. It gets even better: suppose the passenger says 'what's that brown square thing' while you are driving through a city? Communication is difficult, even if you agree on what words to use. Do you think I'm joking? Here is a task for you: using any dictionary you choose, describe how someone should tie their shoes without using any pictures, words only. When you are done, get someone to follow your instructions and see how that works out for you.

Take this paragraph about Environments:

"Action recipes may rely on information about the environment, where actions take place. Therefore, representing environments is part of the specification of the RoboEarth language. It comprises describing the poses of objects and pieces of furniture, but also learned knowledge such as the default location of objects. Using symbolic representations for maps is not suited for uncertain and dynamically changing environments, so that in this case the model of the map and all associated sensory data will be saved in its binary form. In order to create awareness of a map, its meta-data has to be detailed within the RoboEarth language."
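To see why that last sentence is doing a lot of work, here is a hedged sketch of the kind of meta-data a binary map blob would need before another robot could use it. This is NOT the actual RoboEarth schema; every field name below is made up for illustration.

    # Hypothetical map meta-data record -- not the RoboEarth format,
    # just an illustration of how much shared context a raw binary
    # map requires before it means anything to a second robot.
    map_metadata = {
        "map_id": "kitchen-floor2-occupancy",
        "format": "occupancy_grid_v1",        # how to parse the binary blob
        "frame_of_reference": "building_B/floor_2",
        "resolution_m_per_cell": 0.05,
        "origin_pose": {"x": 0.0, "y": 0.0, "theta": 0.0},
        "sensor": {"type": "laser_scanner", "height_m": 0.3},
        "recorded_utc": "2011-02-12T10:15:00Z",
        "known_objects": [                    # the 'learned knowledge' the quote mentions
            {"label": "mug", "default_location": "counter_left", "confidence": 0.7},
        ],
    }

Even this toy record assumes the robots already share conventions for frames, units, and object labels, which is exactly the part I think is being hand-waved.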

I don't think this group has fully thought this through, especially since their system relies on the centralized part to make assumptions and decisions independent of the actual robot.

In a limited and well defined domain with strict compliance to format by the robots, yes this could be useful. It's not an answer to any real problems though. It's an answer looking for a question as far as the real world is concerned.

Just my thoughts... too much hype, not enough substance.

Thursday, February 10, 2011

Chemical Finite State Machines... CFSMs ?

(Editorial note: This post started on Jan 18, 2011 and remains largely unedited flow of consciousness rambling. A few notes are added to sort of make this worth posting. I did say sort of)


Dear MIT, Stanford, UC-Berkeley, CIT, UI-Urbana, GIT, UM, Cornell, CMU, UT Austin and others, (gah, missed out Caltech)



Understand that I have no funding, have never been published, am not considered even a hobbyist scientist, and probably won't be able to write this up well enough to get your attention. I DO have the ambition to try though. That probably means you'll regret reading further... you've been warned. (bajeebus, that's what the justice of the peace said to me!)
(Yes, we are aware that these ramblings are incriminating)


What I know about Finite State Machines and their many derivatives can be described as 'shady' at best. I do know enough to understand that those who understand them and the math involved are most likely from another galaxy. Oh, the basics are easy enough, but once you get past that first page... phew! 0 to brain pain in less than 4 paragraphs.


If you consider the bulk of the rest of my posts (no, I don't expect that actually happened), it's not seriously deranged to believe that I might ponder FSM mechanisms quite often. In view of 'human intelligence', I have pondered the smallest measure of intelligence and how to assemble that small measure into larger systems. I don't think it's possible to do so without thinking of FSMs, even if you have no idea what one is. Given all that, a dichotomy seems to exist between human wetware and the FSM mechanisms we actually have access to. Human wetware is made of carbon and literally swimming in chemicals. Every cell of it is subject to a number of inputs which are not generally considered, TMK. I am beginning to believe that we need to study carbon-based FSM systems or model them more closely.

(Nailed itttt !!! )


Yes, yes, I'll get to the point in a minute.


(Yes, there may be no point... sorry about that)



Now, there are FSM-style systems which might attempt to function in the way that I'm thinking, but I do not believe that they will perform in the same ways. I'm talking about the difference between testing a race track with a Prius and testing it with a McLaren F1. Design and material differences limit how well manufactured machinery can stand in for 'wetware'. That means we need to simulate some test of such a system, and that requires carbon-based wetware to be inserted into the test system in some way. Looking around us we see several examples of such systems which we might be able to use... if we have enough hardware, coordination, and ambition to use them for such observational experimentation. To name a few:


People driving cars on a highway
People in a large city who are walking
People in a sport stadium
People in a shopping mall


The key component is people. No, not Soylent Green. People have the wetware we need. If we can capture their behaviors in FSM-type situations where 'intelligence' is not the major driving force behind their behavior, we should be able to draw generalized understandings of the myriad inputs and states of carbon-based FSM systems. In fact, the math should be similar, I think, but we introduce myriad and seemingly random input variations to each switch in the FSM. The idea is that there is carbon-based, chemically reactive wetware running the switch and controlling the state based on inputs which are not necessarily fixed or known. We can introduce energy levels to the machine at a cellular granularity (more or less)!!! Think about it.
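Here is a minimal sketch of what I mean by a switch with unmeasured inputs. The noise model is purely illustrative, not a claim about real neurons or real drivers; it just shows how a hidden input makes a deterministic switch look probabilistic from the outside.

    import random

    class NoisyStateMachine:
        # A two-state switch whose effective input drifts with an
        # unmeasured 'chemical/field' term -- a toy stand-in for the
        # carbon-based switches described above.

        def __init__(self, threshold=0.5):
            self.state = "off"
            self.threshold = threshold

        def step(self, drive, interference=0.0):
            # 'drive' is the input we think we control; 'interference'
            # is everything we are not measuring.
            effective = drive + interference
            self.state = "on" if effective >= self.threshold else "off"
            return self.state

    machine = NoisyStateMachine()
    for _ in range(5):
        # Same nominal drive every time, but the hidden input makes the
        # observed behavior vary from step to step.
        print(machine.step(drive=0.45, interference=random.gauss(0.0, 0.1)))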


Ok, it's not perfect, it's wildly uncontrolled, and we can't measure all the inputs to any (never mind all) of the switches in the machine, but we CAN observe the effects the machine creates in response to generally understood inputs. Yes, at this point I agree with you: I probably have no idea what I'm talking about.


The point here is to observe the 'information' moving through the machine rather than each element of the machine itself. It is possible that by trying to measure the exact level of the noise we lose the data. I mean that in the nicest of ways, of course. Ok, yeah, I'm losing it too. The information in this system is the overall goal or purpose of the system under test. Search for 'how to study traffic patterns' and you'll find pages that talk about improving throughput... yet none of them address the myriad (I like that word) inputs to the system that affect its efficacy; they rely only on bandwidth studies without really appreciating the data being put through, as it were. If this is sounding like binary feng shui, you might be right. Applying some kind of feng shui to traffic throughput might address the myriad inputs that are typically ignored. Where is the study of the effect of disruptive billboard ads on traffic patterns?


Wow, that was lots of rambling. So what can that mean for my other thoughts? Ah yes... back to intelligence. Random inputs to an understood system will produce random and not-understood results. Despite that, when a generalized system can be categorized, we should be able to formulate understandings of information flow and, more importantly, information usage, even in environments of randomized and not-understood inputs. Gestalt is a good word here.


An example of how this works: how do you decide which of the 240 channels on your cable system to watch when you have come in from the cold and smell something that reminds you of your grandmother's cookies? How much influence does 'cold' have on the system? How much influence does the smell of a familiar event have on the system? What weight do they place on meta rules in the system? These are the problems which have to be addressed in order to understand what intelligence is and what it is NOT. Yes, an autonomous intelligent system is subject to this kind of non-random randomness. Every input of information is weighed against stored meta rules and other information. So when it's cold and you smell your grandmother's cookies, what channel would you watch?
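For what it's worth, here is a toy scoring sketch of that channel question. The weights and channel names are invented; the only point is that each contextual input gets weighed against stored meta rules before a choice falls out.

    # Invented weights: each contextual input nudges a score per option,
    # and the stored 'meta rules' supply the weights.
    meta_rules = {
        "cold":         {"cooking_channel": 0.3, "news": 0.0, "travel": -0.2},
        "cookie_smell": {"cooking_channel": 0.6, "news": 0.0, "travel": 0.1},
    }

    def pick_channel(active_inputs):
        scores = {}
        for signal in active_inputs:
            for channel, weight in meta_rules.get(signal, {}).items():
                scores[channel] = scores.get(channel, 0.0) + weight
        return max(scores, key=scores.get) if scores else None

    print(pick_channel(["cold", "cookie_smell"]))   # 'cooking_channel'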







Wednesday, February 9, 2011

I'm not surprised at all

Over at Robots.net is a report that the brain doesn't work simply by firing synapses. I mention the link at Robots.net because I like that site. They have a link to the original Caltech article.

I'm not surprised by this news at all. Here and there in my meandering thoughts you'll find a few that sort of dwell on this. Simply mapping a group of signals to an individual memory cannot explain how we have such a large capacity for memory. It doesn't even come close to explaining Kim Peek's abilities. Mr Peek's abilities far outstrip our imaginings of how to store that much data in synapse activity alone. There had to be more to it, and I'm betting this is merely a nice top view of a larger iceberg of information about the human brain.

If you can 'influence' the state of a set of synapses through chemical and electromagnetic properties, that set may indeed have millions of states, all of which are valid and linked to other states. It would not be unreasonable to also think it may be problematic to measure those states accurately without influencing them as well.

Why do I think this is important? 

I think this new information is important because the efforts currently underway to mimic the mammalian brain's synaptic constructs are doomed to fail. I'm sure we'll learn something important from such efforts, but the main lesson will be that this kind of mimicry fails really spectacularly. You can assemble all the parts of a vehicle in many ways; not that many of them actually produce a useful vehicle.

Say, arbitrarily, that a clump of neurons can have 1 billion synaptic states. Once you also allow electromagnetic and chemical signaling inputs, the state table multiplies by orders of magnitude.
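A back-of-the-envelope sketch makes the point; the extra input counts below are invented, but the multiplicative effect is what matters.

    # Arbitrary numbers, but the multiplication is the point: every extra
    # modulatory input multiplies the state table instead of adding to it.
    synaptic_states = 10**9   # the '1 billion' figure above, taken as given
    chemical_levels = 8       # hypothetical distinguishable chemical levels
    field_levels = 4          # hypothetical distinguishable field levels

    combined_states = synaptic_states * chemical_levels * field_levels
    print(f"{combined_states:.1e} combined states")   # 3.2e+10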

I'm also reasonably certain that the principle of 'failed system analysis' shows that we can see more of how things work when they're broken than when they are not. Take for instance depression, which is a prime example of something that seems to have a chemically based explanation. We use chemicals to attempt to alter synaptic activity. Sometimes it works, sometimes it doesn't, and such chemicals can induce wild and dangerous undesired effects. I predict we'll find that current chemical treatments for depression are much like pouring chlorine in the gas tank of the 10-year-old family minivan because it seems to have a positive effect on some vehicles.

On a final note, in science fiction stories we've seen authors suggest sleeping arrangements which emit chemical vapors which help induce sleep and restful states as well as machinery which electromagnetically induces sleep states. Perhaps science fiction writers are still ahead of the game in some areas?

Of course we should prepare for the onslaught of claims that cell phone towers are causing brain damage all over again.

Saturday, February 5, 2011

I feel somewhat vindicated... wow

Wow, all I can say is that this podcast with Natasha Mitchell interviewing Nobel laureate Gerald Edelman justifies what I was saying in my post on the beginnings of AI and what the structure of intelligence might actually be: a process. Oh, I can also say that I'm rethinking everything, in the way I'm wont to do when coincidence lays down a similar idea for me to ingest. The mammalian brain, and more specifically the human brain, has the 'wetware' to run the process of 'intelligence' which other mammalian brains do not yet seem to have. I use the cautious phrase 'not yet seem to have' because I'm not quite convinced that all other animals lack it. Helen Keller could not speak at all for many years yet possessed the machinery. Apes do not speak but perhaps possess the rudimentary machinery for it, sans the actual vocal cords and hardware to create human speech.

Apparently, Edelman does not give many interviews, so you'd probably better go listen to this one. I've never wanted to be part of or live in a monastery, but his scientific monastery sounds like a good place to spend a few years, not that I'm qualified. It consists of 40 really smart people working together doing whatever they think is important to do. They are studying the genetics of sleep now that they have shown that insects actually do sleep. How cool is that?

In this interview he talks about some of the differences between 'intelligent' mammals and humans based on consciousness, creativity and what he calls 'neural Darwinism'. Yes, the title didn't really give it away for me either, so listen in for the explanation. It's good stuff. Hopefully his 'monks' will figure out why it is that we sleep because I think that is extremely important to AI.

Note that his thoughts on consciousness seem to preclude the singularity as many imagine it might come to be.

Sunday, January 16, 2011

AI - Where to begin.... maybe

Note: if you believe I'm wrong, please tell me how, post a link to info etc. Criticism is welcome if it is constructive or useful. What follows are my thoughts, not what I think the world must accept.

There are many people who think and work at understanding how our mammalian brains work. Some approach the problem by trying to figure out what each neuron is doing. Others attempt to find which region of the brain is active for any particular activity. Sorry, there is no region of the brain dedicated to just thoughts of Pamela Anderson. There are regions that activate when we think of her, but they are multipurpose and get used for other things and other people too.

NOTE: The mammalian brain is multi-functional. That is, much of it is dedicated or programmed for this task or that, but can be put to use for other things if the situation calls for it. Paraplegics do not simply stop using the parts of their brains which previously were used for controlling their limbs and dealing with nerve signals from those limbs. The extent of brain re-use is (To My Knowledge) unknown.

Remember that TMK part, I'll use it again.

We have found regions of the brain that are sort of hardwired for certain purposes. There are many websites that will show you what the human brain map looks like. When you look at it, it begins to look a lot like a machine: this part does that, that part does this, and so on. Why should it be divided up like this? The brain seems to be made up of cells which are not much different from one another, so why should this area differ in usage from that area? How did it get that way?

  • The simple answer to how is 'evolution'
  • The simple answer to why is 'accident'
Evolution is essentially an accidental, needs-based development system. A mutation occurs and may hang around for many generations before another mutation happens which makes the first mutation useful to survival. Once that happens the organism now has a 'need' for that mutation, a need that was generated accidentally. There is no design. The human brain is subject to evolution and this accidental, needs-based construction. That is to say that what our brains are today can fairly be said to be an accident.

If we examine the brains of very small mammals we find that they are not much different from our own, but at the same time we know that size does make a difference. In fact, the energy used by and dedicated to brain function makes a HUGE difference. The current trouble is that we don't understand all the reasons this is so. A bigger organism has a bigger brain but not necessarily more intelligence. The reason for this is still in question; the number of neurons or their speed are possible explanations. Additionally, I believe that the energy available to the brain has a big impact. No matter how large it is, if the brain does not have the energy it will not function at its possible best. Note that when you are tired or hungry your brain is suffering from a lack of energy: no energy = bad performance.


To be more intelligent we (it would seem) need only to feed more energy to our brains. This is partially true, in that more brain function will require more energy. The trouble is that the body is a system, not simply interconnected units. To feed the brain more energy and have it use that energy requires fine-tuning of the entire system, not simply taking a brain energy pill. Saying it that way doesn't even begin to give credence to the systems which support our brains. Generally they are optimized already, and pushing them harder will cause a breakdown somewhere else or simply fail to work. This optimization takes into account the fact that other parts of the supporting systems may in fact not be optimal.

Before continuing, let's look at what intelligence is... as far as we know:


In effect we can boil it down to this: the body is a support system for the brain. Intelligence does not require a body or even five sensory inputs; see Stephen Hawking and Helen Keller for examples of why I say that. Intelligence exists within the brain.

I think of intelligence as the ability and capacity to acquire, manipulate, and consume information. I don't think you can make it simpler than that. If you make it more complex then you are trying to apply this simple definition to some task or group of tasks.

Given this very small definition of intelligence we can take it and theorize other things from this which may or may not follow traditional thoughts on intelligence:

  • Intelligence is useless without information
  • Intelligence is not information
  • Intelligence without new and changing information is 'probably' useless
In those three sentences we have defined intelligence as a process rather than a possession or object. Now we can apply that process to other organisms and systems, free of the clutter that human task requirements imply. Intelligence does not require abstract thought, for example, though such a capability is without doubt very useful. Intelligence requires only new and changing information to function. Clearly we see humans who do not demonstrate intelligence yet still function as organisms, such as coma victims.

On that last statement, let's be clear that the mammalian brain has multiple functions. Survival and body operations are not part of the intelligence of mammals; they are independent of what we would think of as human intelligence, as can be seen in comatose patients. Already we can begin to unravel what intelligence is not by looking at intelligent systems which are broken or operating outside of normal conditions, where normal is arguably difficult to define. Just the same, we can draw generalized conclusions from limited information about an unknown system if we have enough observational evidence, thus giving us a 'normal' operating set of parameters.

Let's add one more word to our basic definition of intelligence: autonomy.

Mammals are examples of autonomous intelligence, but we also have examples of non-autonomous intelligence. The simple thermostat in your house is a very simple non-autonomous intelligent system. It acquires, manipulates, and consumes information. The fact that its existence and form are fixed and dependent on the information that it uses removes any autonomy from this system. Most people will say that it is not intelligent, but by the most basic definition of intelligence it is. Perhaps you feel this is setting the bar too low, but I argue otherwise. Your body is made up of trillions of simple cells. Their combined impact on the world shows autonomous intelligence. Do you control the white blood cells in your body? Do they act intelligently? They acquire and consume information about bad stuff in your body. They do not attack just anything. Yet they are not autonomous intelligent systems, simply because their capacity to process information is harshly limited in scope.
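To make the acquire/manipulate/consume loop concrete, here is a minimal thermostat sketch; the set point and hysteresis values are arbitrary.

    class Thermostat:
        # Acquires a temperature reading, manipulates it against a set
        # point, and consumes the result by switching the furnace.

        def __init__(self, set_point, hysteresis=0.5):
            self.set_point = set_point
            self.hysteresis = hysteresis
            self.heating = False

        def update(self, temperature):
            if temperature < self.set_point - self.hysteresis:
                self.heating = True
            elif temperature > self.set_point + self.hysteresis:
                self.heating = False
            return self.heating

    stat = Thermostat(set_point=20.0)
    for reading in [18.0, 19.7, 20.8, 19.2]:
        print(reading, stat.update(reading))

It runs the whole loop, yet there is no path by which it could ever start using any information other than a temperature number, which is exactly the non-autonomy I'm describing.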

We can see that autonomy in intelligent systems requires a system which is adaptable and not reliant on the information it consumes for its purpose. That is to say, it is not restricted to specific information. A white blood cell, as part of the immune system, will help cause organ rejection; the scope of the information consumable by the white blood cell is very restricted. There is no autonomy when function is reliant on the information consumed.

I have defined intelligence as a process of information usage. Further, I have defined two basic types of intelligence: autonomous and non-autonomous processes. The question now is how to apply that to what we believe we know of intelligence, however we otherwise define it. Many of us will be unwilling to accept that non-autonomous intelligence qualifies as intelligence, yet I argue that it does. This simple definition of intelligence leaves us free to examine intelligence at all levels of complexity.

In a single thought, we can see the scope of this task. The mammalian eye comes in many shapes and sizes, yet the information it provides the brain is used for much the same things in all mammals. We also know that this information is consumed in varying ways and to varying degrees. Predators have different vision systems than most prey. Some prey animals have independent eye movement and vision processing in order to better avoid being 'dinner'. Human intelligence can function with good or bad vision and is capable of compensating where non-autonomous intelligence would fail. This is a problem for robotics and AI in general. Simply sticking non-autonomous intelligent systems together does not make an autonomous intelligent system. There seems to be something more required.

This brings us back to the third part of my hypothesis: Intelligent systems without new and changing information are probably useless. The other side of this coin is that a system which cannot consume new and changing information outside the normal scope is non-autonomous intelligence. 

It can be generally agreed that the ultimate goal of AI is autonomous intelligence.

How to make the leap to autonomous intelligence

This is where the magic happens: information storage. If you have to burn your finger every time you see a hot stove, you need a storage upgrade or a better memory-retrieval system. This demonstrates why operating only on current sensory inputs reduces an autonomous intelligent system to a complex but non-autonomous intelligent system.

This is a serious problem for AI scientists. Not only does our system have to be intelligent and have variable scope, but it has to be able to store implied information (or meta data) about previous actions. In simplistic terms, we humans learn that red is the color of hot and often results in pain. The implication to be stored is that red = pain. Over time we learn this is not always true, but we remain cautious around red things. Here is the problem: how to remember that red equals pain. Sounds simple to us, but break that down against the simple definition of intelligence. We further refine the meta data so that when sensory data includes not only the color red but also the sense of heat, it usually means pain if touched. So now we have red + heat = pain. You can imagine the many iterations of storing meta data regarding red, heat, object descriptions, and pain. Those many iterations are the variable scope of our autonomous intelligence in this situation; without them we would get burned a lot. We apply this meta data to the shape and color of flames, and no matter how fake the flames are we still associate danger of pain with them... forever. In essence, we store that meta data for long-term re-use. We don't store the exact color of red or shape of flame, but generalities of them. Take a moment and describe the color of a hot electrical burner on your stove, or the color of a candle flame. Not easy, is it? But you would definitely recognize either if you saw it, along with all that its existence implies from stored meta data. I want to give this association or meta data a name; memory is not really a good word for it. It is a rule based on implied properties of specific information, in this case sensory input.

We can say that variable-scope autonomous intelligence must be able to create, store, and use meta rules. This implies another simple aspect of autonomous intelligence: most memory storage includes meta data from all available information. Further, through repetition this information is formulated into a rule for future observation of such grouped information. This might be called learning, yet at the base level it is simply information relationships and storage of the meta data about that information.
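A toy sketch of the red + heat = pain idea, with invented feature names: repetition strengthens a stored rule, and a partial match generalizes the rule to new input.

    from collections import defaultdict

    class MetaRuleStore:
        # Toy 'meta rule' store: co-occurrence counts stand in for rules,
        # and repetition strengthens them.

        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, features, outcome):
            # Store meta data about this grouping of sensory features.
            self.counts[features][outcome] += 1

        def predict(self, features):
            # Apply stored rules to new input; any rule whose features
            # are a subset of the observation is a generalized match.
            best, best_count = None, 0
            for known, outcomes in self.counts.items():
                if known <= features:
                    for outcome, n in outcomes.items():
                        if n > best_count:
                            best, best_count = outcome, n
            return best

    rules = MetaRuleStore()
    for _ in range(3):
        rules.observe(frozenset({"red", "hot"}), "pain")      # repeated burns
    print(rules.predict(frozenset({"red", "hot", "round"})))  # 'pain'

Even this toy exposes the scaling problem discussed next: predict() walks every stored rule for every new observation.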

Now what we need is a mechanism to apply every meta rule to every piece of new information. We're either going to need a very big computer or a lot of small computers working harmoniously in parallel.
Speed of information retrieval and rule application is clearly a problem. We're going to need a pretty special mechanism to achieve suitable performance for this function, and the more we 'learn', the more difficult this task will become. This is not just a problem for robotics and AI; it's a real problem for humans. If you are unsure what I mean, pretend your life depends on passing the algebra exam tomorrow. You know the one, the one you took in 10th grade.

In the way that flying is easy, building AI is easy. In the former simply throw yourself at the ground and miss. In the latter, just figure out how to remember all the information you acquire and build rules from it to then apply to every new piece of information.

AI is easy, right?