Wednesday, February 23, 2011

More Than Synapses Snapping Between Your Ears!

In a previous post I 'rambled' about wetware Finite State Machines. This relates to evidence that synaptic efficiency, or perhaps even synaptic function, is affected not simply by whether a neuron fires but by the timing and strength of the firing, as well as by external magnetic interference and other factors. In fact, the brain itself generates magnetic and electric fields which affect neurons beyond those connected directly. This means that each neuron has more than a single input, and it's likely that those several inputs carry levels other than simple on/off.
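To make that concrete, here's a toy sketch in Python contrasting the classic on/off threshold view of a neuron with one whose drive is also nudged by ambient field and chemical levels. The weights and names here are pure assumptions on my part, not anything from the research:

```python
# Toy contrast between a binary neuron and one with graded, multi-channel
# input. All names and weights are illustrative assumptions only.

def binary_neuron(inputs, threshold=1.0):
    """Classic on/off view: fire iff summed synaptic input crosses a threshold."""
    return 1 if sum(inputs) >= threshold else 0

def graded_neuron(synaptic, field=0.0, chemical=0.0, threshold=1.0):
    """Same cell, but ambient field and chemical levels shift the effective
    drive, so identical synaptic input can produce different outputs."""
    drive = sum(synaptic) + 0.3 * field + 0.5 * chemical  # weights assumed
    return 1 if drive >= threshold else 0

spikes = [0.4, 0.4]
print(binary_neuron(spikes))             # below threshold: 0
print(graded_neuron(spikes))             # no field/chemical input: 0
print(graded_neuron(spikes, field=1.0))  # an ambient field tips it over: 1
```

The point is only that once those extra channels are allowed to count, the same synaptic input no longer maps to a single output.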

Sounds interesting enough, but there is not much published work that I can find. That said, it does not mean there is no apropos information. I found a link to a paper today about a study of the human brain while active cellular phones were pressed against the head. A cell phone is an active source of electric fields introduced to the brain. There are many authors involved, but their conclusion goes like so:

Conclusions: In healthy participants and compared with no exposure, 50-minute cell phone exposure was associated with increased brain glucose metabolism in the region closest to the antenna. This finding is of unknown clinical significance.
In this post I'm suggesting a possible significance for this finding. Imagine if it means that holding your phone to your left ear while driving slows the response time of your right foot, leg, arm, and hand.

Whether this indicates overstimulation, higher random noise, or productive or non-productive activity is unknown. That it shows activity caused by electromagnetic radiation is important. This supports, however incidentally, the theories proposed by Michael Persinger and his 'god helmet'. Any evidence that the spiritual is an artifact of the human brain is further evidence that the human brain, while complex, is fully responsible for our perceived realities. This is true whether we perceive physical or metaphysical things. The lack of any idea what a proper control should be in such studies may explain the uncertainty of Persinger's claims. That is to say, since we do not know what electromagnetic radiation does to neuronal activity, controls would be difficult to design and may in fact vary from subject to subject.


The reason this is important to me is that to truly arrive at AI, it must be possible to recreate some or all of the innate, subconscious mechanisms which give rise to human 'intelligence' via man-made machinery. If there is a spiritual component, we are not going to see AI like Data in Star Trek. That would be a true shame.

Sure, all that I've done here is string together some thoughts. There is no real math involved, nor any particularly good references. That's the joy of having a blog to jot down my thoughts. I can come back later and fill in some details. I do think these are solid indicators that the brain functions on more than simple synapse firings. The findings that some animals have a sense of magnetic fields show that what we know about brains is actually very little. It would seem plausible that humans can in some way perceive electromagnetic fields.

UPDATE: We have another winner. It would appear that loggerhead turtles can navigate the Earth's oceans using a wetware positioning system based on the Earth's magnetic field. The story at Discovery online tells how the turtles 'see' the world a bit differently than we humans do. There are quite a few discoveries showing this type of sense. Perhaps it's MUCH more common than we think it is.

Imagine you have a form of synesthesia where north feels warm and south feels chilly, or east feels 'bright' and west feels 'dark'. It would enable you to always determine what direction you were heading. In fact, you only need a synesthetic feeling for one direction to know which way you're facing at all times.
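As a toy illustration of why one felt direction suffices, suppose the sense reports where north 'feels' relative to the way you're facing, in degrees clockwise from straight ahead. Recovering your heading is then one line of (entirely hypothetical) arithmetic:

```python
# One synesthetic direction sense is enough to fix your heading.
# Input: bearing of 'north' relative to your facing, degrees clockwise,
# where 0 means north feels dead ahead. Output: compass heading
# (0 = north, 90 = east, 180 = south, 270 = west).

def heading_from_north_sense(felt_bearing_to_north):
    return (-felt_bearing_to_north) % 360

print(heading_from_north_sense(0))    # north dead ahead -> heading 0 (north)
print(heading_from_north_sense(270))  # north off to your left -> heading 90 (east)
print(heading_from_north_sense(90))   # north off to your right -> heading 270 (west)
```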

I hope and wait for a fundamentally more solid understanding of the mechanisms of the human/mammalian brain in the very near future.

Tuesday, February 22, 2011

The Clues Are Piling Up

Perhaps I'm a bit early in being excited, but the early conclusions from research reported by BBC Earth News, conducted by Professor John David Smith of the State University of New York at Buffalo and Michael Beran of Georgia State University, have me excited.

Their study uses macaque monkeys trained to press buttons according to pictures presented to them. The research finds that the Old World macaques seem to know when they don't know the answer. It isn't simply a best-guess effort to get another piece of food. They actually behave like humans: they hesitate, and they will choose the 'I don't know' button when one is offered.

Dr Smith explained: "There is a big theoretical question at stake here: Did [this type of cognition] develop only once in one line of the primates - emerging only in the line of Old World primates leading to apes and humans?"
I know that I'm sometimes quick to get excited about discoveries, but if this is correct, it means that humans inherited many of their cognitive abilities, if not all of them, from an earlier ancestor. That is to say, they are not something unique to humans. This is important because we want to know about the origin of our intelligence in the effort to create AI.

There is another supremely interesting effect. If it can be shown that we inherited much of our mental faculties, then early humans would likely have been as intelligent as we are now; in the nascent human civilization they simply lacked knowledge. We ARE learning machines, singly and collectively. This might go a long way toward explaining how there are marvels older than written history, which are often explained as requiring extraterrestrial life or intelligence to design and implement. I did say 'might', since much more information needs to be collected about such ancient marvels. I firmly believe that the notion that early humans were incapable of solving problems modern humans only know how to solve with machines is naive at best, or egocentric at worst.

UPDATE: There is a story over at NPR about a burial site in Jordan where they found a roughly 16,000-year-old burial in which a fox was buried in much the same way dogs were buried with their owners much later. It's not reasonable to draw concrete conclusions about whether foxes were 'pets' 16,000 years ago, long before domesticated dogs. However, there IS something that can be said of this finding which is appropriate here. Despite the lack of modern knowledge, human behaviors were much like they are today. If we had to live in that environment, without knowledge from the industrial revolution, I opine that we would end up much the same as what we think they were. This is important because we need to judge what evolution has done for intelligence. It can be agreed that early humans were intelligent beings. They lacked modern knowledge yet still created wonders of architecture, domesticated animals, began raising crops to survive winter, and did many other things that we modern humans would do if we found ourselves in such a situation. It's possible that this argues for ancient astronauts, but in my opinion it argues more strongly for the fact that we are learning machines and possessed the basics we now have from the beginning, or near to it. The more we learn, the more we refine our behaviors and technology. That we still possess many of those behaviors speaks to the fact that many are biologically based.

The evolution of the body of human knowledge is special because we so easily share and copy it, as well as make improvements. If the ancient Egyptians had cement, imagine what other processes were known and later forgotten. Perhaps worse, how many were deemed acts or gifts of gods and were thrown away in favor of newer gods?

This discovery holds big promises.

And just a bit more to add to the notion that our brains are not mystical, but wetware machines. Over at Neurology.com is a report indicating that an active brain is better at protecting itself from aging ailments such as Alzheimer's. The report covers research on people who have been multilingual for a large portion of their lives. Multilingualism requires extra effort in several parts of the brain, leading to the idea that activity increases protection against the barnacles of aging. A rough analogy would be to compare this extra activity in the brain to doing regular maintenance on your car or building.

There have been other studies which show that activity protects the brain from aging problems. Clearly, to me, this points to the machine qualities of the brain versus magic. AI is possible unless there is magic involved.

If you need more evidence that your brain is more machine than magic, spend a few minutes with this video about the differences between humans and chimpanzees.

Saturday, February 12, 2011

RoboEarth - coloring inside the lines

Full disclosure: I'm not a scientist by profession, nor in the robotics business. I do however have an interest in Artificial Intelligence.

With that said, I saw the news about RoboEarth this week. They have a lot of PDF files to read if you're interested. They seem to be mostly top-down discussions of how robots in strictly confined domains can cooperate. Think of robots working together in the same building. The word 'earth' in the name seems a bit misleading. It seems more like a shared controller than anything resembling artificial intelligence, or IMO even a way to make sophisticated robots cheaper.

It may well work in well-defined domains, but I don't think it will scale at all when different domains or radically different robots are mixed together. It all sounds good from the 35,000-foot view, but getting robots to talk the same language is a bit more difficult than it at first appears.

If we simply look at how humans communicate it is easy to demonstrate why a simplistic view of communications is silly... at best. Say you are driving a car and one of the passengers says 'hey, what is that thing over there?'

Unless you are in a desolate place where there is only ONE thing to look at aside from desolate flat earth, you will have to interpret what in the world the other person is talking about, if that is even possible. It gets even better: Suppose the passenger says 'what's that brown square thing' while you are driving through a city? Communications are difficult, even if you agree what words to use. Do you think I'm joking? Here is a task for you: using any dictionary you choose, describe how someone should tie their shoes without using any pictures, words only. When you are done, get someone to try them out and see how that works out for you.

Take this paragraph about environments:

"Action recipes may rely on information about the environment, where actions take place. Therefore, representing environments is part of the specification of the RoboEarth language. It comprises describing the poses of objects and pieces of furniture, but also learned knowledge such as the default location of objects. Using symbolic representations for maps is not suited for uncertain and dynamically changing environments, so that in this case the model of the map and all associated sensory data will be saved in its binary form. In order to create awareness of a map, its meta-data has to be detailed within the RoboEarth language."

I don't think this group has fully thought this through, especially since their system relies on the centralized part to make assumptions and decisions independent of the actual robot.

In a limited and well defined domain with strict compliance to format by the robots, yes this could be useful. It's not an answer to any real problems though. It's an answer looking for a question as far as the real world is concerned.

Just my thoughts... too much hype, not enough substance.

Thursday, February 10, 2011

Chemical Finite State Machines... CFSMs ?

(Editorial note: This post was started on Jan 18, 2011 and remains a largely unedited stream-of-consciousness ramble. A few notes have been added to sort of make it worth posting. I did say sort of.)


Dear MIT, Stanford, UC-Berkeley, CIT, UI-Urbana, GIT, UM, Cornell, CMU, UT Austin and others, (gah, missed out Caltech)



Understand that I have no funding, have never been published, am not considered even a hobbyist scientist, and probably won't be able to write this up well enough to get your attention. I DO have the ambition to try though. That probably means you'll regret reading further... you've been warned. (bajeebus, that's what the justice of the peace said to me!)
(Yes, we are aware that these ramblings are incriminating)


What I know about Finite State Machines and their many derivatives can be described as 'shaky' at best. I do know enough to understand that those who understand them and the math involved are most likely from another galaxy. Oh, the basics are easy enough, but once you get past that first page... phew! Zero to brain pain in less than four paragraphs.


If you consider the bulk of the rest of my posts (no, I don't expect that actually happened), it's not seriously deranged to believe that I might ponder FSM mechanisms quite often. In view of 'human intelligence', I have pondered the smallest measure of intelligence and how to assemble that small measure into larger systems. I don't think it's possible to do so without thinking of FSMs, even if you have no idea what one is. Given all that, a gulf seems to exist between human wetware and the FSM mechanisms we actually have access to. Human wetware is made of carbon and is literally swimming in chemicals. Every cell of it is subject to a number of inputs which are not generally considered, to my knowledge. I am beginning to believe that we need to study carbon-based FSM systems, or model them more closely.

(Nailed itttt !!! )


Yes, yes, I'll get to the point in a minute.


(Yes, there may be no point... sorry about that)



Now, there are FSM-style systems which might attempt to function in the way that I'm thinking, but I do not believe that they will perform in the same ways. I'm talking about the difference between testing a race track with a Prius and testing it with a McLaren F1. Design and material differences limit the efficacy of testing manufactured 'wetware-like' machines. That means we need to simulate some test of such a system, which requires carbon-based wetware to be inserted into the test system in some way. Looking around us, we see several examples of such systems which we might be able to use, if we have enough hardware, coordination, and ambition to use them for such observational experimentation. To name a few:


People driving cars on a highway
People in a large city who are walking
People in a sport stadium
People in a shopping mall


The key component is people. No, not Soylent Green. People have the wetware we need. If we can capture their behaviors in FSM-type situations where 'intelligence' is not the major driving force behind their behavior, we should be able to draw generalized understandings of the myriad inputs and states of carbon-based FSM systems. In fact, the math should be similar, I think, but we introduce myriad and seemingly random input variations to each switch in the FSM. The idea is that there is carbon-based, chemically reactive wetware running each switch and controlling its state based on inputs which are not necessarily fixed or known. We can introduce energy levels to the machine at a (more or less) cellular granularity! Think about it.
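Here's a minimal sketch of what I mean by a carbon-flavored FSM: an ordinary transition table whose switching can be overridden by an unmeasured, continuous 'chemical' input. The states, symbols, noise model, and threshold are all made up for illustration:

```python
# A deterministic FSM transition table, perturbed by a continuous noise
# input standing in for chemical/electromagnetic interference. Purely a
# toy; the states and threshold are arbitrary assumptions.

TRANSITIONS = {
    ("idle", "go"):     "moving",
    ("moving", "go"):   "moving",
    ("moving", "stop"): "idle",
    ("idle", "stop"):   "idle",
}

def step(state, symbol, noise, flip_threshold=0.9):
    """Follow the table, unless the 'chemical' noise is strong enough to
    override the switch and flip the state instead."""
    if noise > flip_threshold:
        return "idle" if state == "moving" else "moving"
    return TRANSITIONS[(state, symbol)]

print(step("idle", "go", 0.2))     # normal transition: moving
print(step("moving", "stop", 0.2)) # normal transition: idle
print(step("moving", "go", 0.95))  # interference overrides the table: idle
```

Observing only the sequence of states, you would see a machine that mostly behaves, with occasional transitions that no table lookup explains, which is exactly the kind of behavior I'm suggesting we study in people-based systems.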


Ok, it's not perfect, wildly uncontrolled, and we can't measure all the inputs to any (never mind all) the switches in the machine but we CAN observe the effects the machine creates in response to generally understood inputs. Yes, at this point I agree with you. I probably have no idea what I'm talking about.


The point here is to observe the 'information' moving through the machine rather than each element of the machine itself. It is possible that by trying to measure the exact level of the noise we lose the data. I mean that in the nicest of ways, of course. Ok, yeah, I'm losing it too. The information in this system is the overall goal or purpose of the system under test. Search for 'how to study traffic patterns' and you'll find pages that talk about improving throughput... yet none of them address the myriad (I like that word) inputs to the system that affect its efficacy; they rely only on bandwidth studies without really appreciating the data being put through, as it were. If this is sounding like binary feng shui, you might be right. Using some kind of feng shui on traffic throughput might address the myriad inputs that are typically ignored. Where is the study of the effect of disruptive billboard ads on traffic patterns?


Wow, that was lots of rambling. So what can that mean to my other thoughts? Ah yes... back to intelligence. Random inputs to an understood system will produce random and not understood results. Despite that, when a generalized system can be categorized, we should be able to formulate understandings of information flow and more importantly, information usage, even in environments of randomized and not understood inputs. Gestalt is a good word here.


Example of how this works: how do you decide which of the 240 channels on your cable system to watch when you have come in from the cold and smell something that reminds you of your grandmother's cookies? How much influence does 'cold' have on the system? How much influence does the smell of a familiar event have on the system? What weight do they place on meta rules in the system? These are the problems which have to be addressed in order to understand what intelligence is and what it is NOT. Yes, an autonomous intelligent system is subject to this kind of non-random randomness. Every input of information is weighed against stored meta rules and other information. So when it's cold and you smell your grandmother's cookies, what channel would you watch?
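The channel-surfing example above can be sketched as a weighted match between context inputs and per-channel affinities. Every name and number below is invented; the point is only the shape of the computation, where each input is weighed against stored associations before a choice emerges:

```python
# Toy weighted-decision sketch for the channel-choice example.
# Context inputs carry strengths; each channel has assumed affinities
# for those inputs; the pick is simply the best weighted match.

CONTEXT = {"cold": 0.7, "grandma_cookie_smell": 0.9}

CHANNEL_AFFINITY = {
    "cooking_show":  {"cold": 0.2, "grandma_cookie_smell": 0.9},
    "winter_sports": {"cold": 0.8, "grandma_cookie_smell": 0.1},
    "news":          {"cold": 0.1, "grandma_cookie_smell": 0.1},
}

def pick_channel(context, affinities):
    """Score each channel as sum(input strength * channel affinity)."""
    def score(channel):
        return sum(context[k] * affinities[channel].get(k, 0.0) for k in context)
    return max(affinities, key=score)

print(pick_channel(CONTEXT, CHANNEL_AFFINITY))  # the cookie smell wins: cooking_show
```

In a real brain the weights themselves would presumably shift with chemistry, mood, and history, which is precisely what makes the problem hard.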







Wednesday, February 9, 2011

I'm not surprised at all

Over at Robots.net is a report that the brain doesn't work simply by firing synapses. I mention the link at Robots.net because I like that site. They have a link to the original Caltech article.

I'm not surprised by this news at all. Here and there in my meandering thoughts you'll find a few that sort of dwell on this. The problem of simply mapping a group of signals to an individual memory cannot explain how we have such a large capacity for memory. It doesn't even come close to explaining Kim Peek's abilities. Mr Peek's abilities far and away outstrip our imaginings of how to store that much data in synapse activity alone. There had to be more to it and I'm betting this is merely a nice top view of a larger iceberg of information about the human brain.

If you can 'influence' the state of a set of synapses through chemical and electromagnetic properties, that set may indeed have millions of states, all of which are valid and linked to other states. It would not be unreasonable to think it may also be problematic to measure those states accurately without influencing them as well.

Why do I think this is important? 

I think this new information is important because the efforts currently underway to mimic the mammalian brain's synaptic constructs are doomed to fail. I'm sure we'll learn something important from such efforts, but the main lesson will be that this kind of mimicry fails spectacularly. You can assemble all the parts of a vehicle in many ways; not many of those ways actually produce a useful vehicle.

Suppose you arbitrarily say that a clump of neurons can have 1 billion synaptic states. Once you also allow electromagnetic and chemical signaling inputs, the state table multiplies by orders of magnitude.
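A quick back-of-envelope version of that blowup. The modulator count and level count are entirely arbitrary assumptions; the point is the multiplicative growth, not the particular numbers:

```python
# How fast the state table grows once extra signaling channels count.
# All figures here are arbitrary for illustration.

synaptic_states = 10**9  # the arbitrary "1 billion" from the text

# Suppose each of 4 chemical/electromagnetic modulators is coarsely
# quantized to 10 distinguishable levels:
modulators = 4
levels = 10

total_states = synaptic_states * levels**modulators
print(total_states)  # 10**13: four orders of magnitude beyond synapses alone
```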

I'm also reasonably certain that the principle of 'failed system analysis' shows that we can see more of how things work when they're broken than when they are not. Take, for instance, the condition of depression, which is a prime example of something that seems to have a chemically based explanation. We use chemicals to attempt to alter synaptic activity. Sometimes it works, sometimes it doesn't, and such chemicals can induce wild and dangerous undesired effects. I predict we'll find that our current chemical treatments for depression are much like pouring chlorine into the gas tank of the ten-year-old family minivan because it seems to have a positive effect on some vehicles.

On a final note, in science fiction stories we've seen authors suggest sleeping arrangements which emit chemical vapors which help induce sleep and restful states as well as machinery which electromagnetically induces sleep states. Perhaps science fiction writers are still ahead of the game in some areas?

Of course we should prepare for the onslaught of claims that cell phone towers are causing brain damage all over again.

Saturday, February 5, 2011

I feel somewhat vindicated... wow

Wow, all I can say is that this podcast with Natasha Mitchell interviewing Nobel laureate Gerald Edelman justifies what I was saying in my post on the beginnings of AI and what the structure of intelligence might actually be: a process. Oh, I can also say that I'm rethinking everything, in the way that I'm wont to do when coincidence lays down a similar idea for me to ingest. The mammalian brain, and more specifically the human brain, has the 'wetware' to run the process of 'intelligence' which other mammalian brains do not yet seem to have. I use the cautious phrase 'not yet seem to have' because I'm not quite convinced that all other animals lack it. Helen Keller could not speak at all for many years, yet she possessed the machinery. Apes do not speak but perhaps possess the rudimentary machinery for it, sans the actual vocal cords and hardware to create human speech.

Apparently, Edelman does not give many interviews, so you'd probably better go listen to this one. I've never wanted to be part of or live in a monastery, but his scientific monastery sounds like a good place to spend a few years, not that I'm qualified. It consists of 40 really smart people working together doing whatever they think is important to do. They are studying the genetics of sleep now that they have shown that insects actually do sleep. How cool is that?

In this interview he talks about some of the differences between 'intelligent' mammals and humans based on consciousness, creativity and what he calls 'neural Darwinism'. Yes, the title didn't really give it away for me either, so listen in for the explanation. It's good stuff. Hopefully his 'monks' will figure out why it is that we sleep because I think that is extremely important to AI.

Note that his thoughts on consciousness seem to preclude the singularity as many imagine it might come to be.