by GWDarcie | Mar 22, 2015 | Robotech
The more we learn about how we humans work, the more able we are to use biomimetics to leap forward with our technology. Biomimetics is the mimicking of biology. Many of our most promising new technologies borrow from the remarkable solutions arrived at by nature over millions of years of evolution. Examples of biomimetic products include Velcro, aerogels, superhydrophobic surfaces, efficient wing shapes, solar cells, ultra-strong materials, and many more, with a multitude of new projects going on in labs all over the world, each seeking to imitate some extraordinary feature found in nature.
Robot design will benefit from biomimetics in numerous ways. Androids, by definition, are imitations of the human body plan; that is, humanoid. But the imitation can and will go much deeper. The human skeletal system, with its attached muscles and tendons, differs in significant ways from the typical chassis frame of current androids, and this difference is reflected on the functional level. It turns out that human locomotion is remarkably efficient, and so one approach to making android locomotion more efficient will be to mimic the human gait, which will require a similar musculoskeletal construction.
Nature, through natural selection, arrives at solutions that are sufficient to enable the organism to survive and thrive in the face of whatever challenges are present in the local environment. It is important to note that nature arrives at sufficient solutions, which, while often far better than anything humans can think up on their own, are not necessarily optimal solutions. This means that biomimetics, in the hands of competent researchers and engineers, offers an initial leap forward, but not necessarily an end point. Once we figure out how nature accomplishes a task, we may be capable of improving on it even further.
Take the human spine, for example. It is a mind-boggling structure from an engineering perspective, breathtaking in its complexity, flexibility, strength, and durability. It enables humans to stand erect for extended periods, carry heavy weights, move, flex, tumble, and roll, and it supports the movements of all the limbs. And yet it is far from perfect, as evidenced by the number of back problems people experience. As we develop future android chassis, we will have the benefit of using what works well, and trying to improve upon it.
The androids of the near future will not look mechanical, like the robots of today, with steel rods, gears and wires, and metal covering plates held on with bolts and screws. All of these industrial age materials will be obsolete within the next 10 – 20 years, replaced by new, much lighter and stronger materials, molded or printed into the desired shapes. Many of these materials are already in development, inspired by biomimetics.
Android chassis will not be the only things to benefit from biomimetics. So will their brains. One specialized area of biomimetics is neuromimetics.
Neuromimetics is the mimicking of the nervous system, including the brain. While this does, of course, require a solid understanding of how the system works, it also helps us to further refine our models. If our emulations don’t work the way we expect, we can go back to our models to figure out where we’ve gone wrong and try again.
There are actually two branches of neuromimetics: medical and technological. It is technological neuromimetics to which I refer when I use the term, that is, building technology that mimics the workings of the nervous system and brain. The use of neuromimetics is still in its infancy. A current example of this is seen in neuromorphic chips: microprocessors configured to resemble the wiring of the brain, rather than that of traditional circuits. (See The MIT Technology Review – http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/)
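To make the neuromorphic idea a bit more concrete, here is a toy sketch of the kind of spiking-neuron model that such chips implement in silicon: a leaky integrate-and-fire neuron. All the parameter values below are illustrative assumptions of mine, not the specifications of any real chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy illustration of the
# spiking-neuron models that neuromorphic chips implement in hardware.
# Parameter values are illustrative assumptions, not chip specs.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.1, dt=1.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks back toward rest while integrating input.
        v += dt * (-(v - v_rest) * leak + i_in)
        if v >= v_thresh:       # threshold crossed: emit a spike
            spikes.append(t)
            v = v_rest          # reset after spiking
    return spikes

# A steady input current produces a regular spike train.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Unlike a conventional processor stepping through an instruction stream, a neuromorphic chip runs huge numbers of simple units like this in parallel, communicating by spikes, which is what makes the wiring, rather than the program, the interesting part.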
It will be exciting to see where this work leads in the next decades, as its fruits are combined with those of the Human Brain Project (https://www.humanbrainproject.eu/) and other efforts. Computers can be expected to become much more capable and efficient. But will they be able to think?
Let’s explore that question next time.
by GWDarcie | Jan 31, 2016 | Robotech
Here is a riddle. I am invisible and obvious at the same time. I seem so substantial that I occupy most of your waking attention yet I am completely insubstantial. I carry the apparent weight of critical importance while being largely irrelevant. I constantly blind-side you from out of nowhere but am under your control. I am natural yet completely disregard the laws of nature. What am I?
I’m your thoughts, of course. Thinking is so subjectively invisible that few people ever notice they are doing it. When attention is focused on it, it is so subjectively obvious that few stop to wonder about it. So, can a machine think? That depends on how you define the word “think”, which turns out to be a lot harder than you might think.
What is thinking? What does it mean to think?
These are questions only a psychologist could love. Don’t ask me why, but psychologists tend to love hard questions. But I won’t bore you with the long list of cognitive activities that are commonly referred to as thinking. As AI research pushes against its limits, researchers are increasingly turning to the cognitive sciences for answers. And the short answer is, no, machines cannot think. At least not yet. (Eventually they will.) When they are computing, they perform a limited array of cognitive tasks, and can do so very well. Some can even learn. But they are not thinking in the way that humans think.
Computers run instruction sets composed of binary code, which produce more binary code instruction sets. Human thought consists largely of mental images (cognitive maps and models), which include symbols, words and sounds as well as pictures, all tied to previous experience.
Computers remember by storing bits and recording their location for later access. They are not reminded of things, nor do they try to recall things. Humans have at least three distinct memory systems, each working out of different brain subsystems. We can “think” about the past, but we also remember lots of things without thinking about them, such as how to ride a bike.
Machines are getting pretty good at imitating some of our mental activities. In fact, I consider the Turing Test to be obsolete. There are apparently many computers that can now pass it. They do so with the help of some very good programming that enables them to imitate people quite well. It turns out we are not that hard to imitate in casual conversation. We tend to be quite mechanical with each other, speaking in clichés and well-worn scripts. But just because a machine can simulate a conversation does not mean that it can think. The process that generates the words has nothing in common with how we generate speech.
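The “well-worn scripts” point is easy to demonstrate. Here is an ELIZA-style responder of the sort that has fooled people since the 1960s: pattern matching plus canned templates, with no understanding anywhere in the loop. The rules below are my own minimal examples, not any real chatbot’s.

```python
import re

# ELIZA-style scripted responder: pattern matching plus canned templates.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the user's own words back inside a stock phrase.
            return template.format(*match.groups())
    return "Tell me more."  # generic fallback keeps the conversation going

print(respond("I feel tired today"))  # → "Why do you feel tired today?"
```

A handful of such rules can sustain a surprisingly convincing exchange, which says less about the machine than it does about how scripted our casual conversation already is.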
Machines can now perform an impressive array of cognitive tasks. If you want to call what machines do “thinking”, then feel free. But it is a qualitatively different thing from what goes on in the human mind. It is not “human-like” thinking. Strong AI will require the ability to think in a human-like way. As you know from my earlier posts, I believe that sentience is the long-overlooked key to success. If I am right, the first thinking machine will not be a computer. It will be an android. Why? Because sentience is an embodied process. It will be far easier to build sentience into an android than to simulate sentience in a box.
by GWDarcie | Feb 22, 2016 | Robotech
There is a simple reason why Artificial General Intelligence (AGI) has been so elusive. To date, most work on AGI has been based on a misconception: that human intelligence is produced by the brain. Yes, the brain plays a large role, but it does not act alone. It is but one part of a larger functional system: the central nervous system (CNS).
The division between brain and body is a vestige of an obsolete dualism that continues to exert a strong influence in science education. Yet the division is a false one. The confusion persists because the brain is structurally separable (a definable organ) but functionally inseparable from the CNS. The brain depends on input from the body in order to work.
As long as AGI research seeks a functional human-like intelligence, it is a fallacy to consider the brain to be a separable component. The data that contributes to general intelligence is collected and processed by the entire CNS. There are clusters of neurons outside the brain, particularly in the spinal cord, that process information and pass it on up – little mini-brain sub-processing stations, if you like. As I have earlier explained, the sub-cortical body-brain networks produce the foundational knowledge upon which all other knowledge is based.
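Those “mini-brain sub-processing stations” can be caricatured in code. In the sketch below, a local reflex loop acts on a strong stimulus before the slower central process ever weighs in, which is roughly how a spinal withdrawal reflex works. The threshold, the function names, and the responses are all illustrative assumptions.

```python
# Toy illustration of sub-cerebral processing: a spinal-reflex loop that
# reacts locally, ahead of the slower central "brain" process.

def spinal_reflex(stimulus_intensity, threshold=0.8):
    # Local loop: fast, fixed response, no central processing required.
    if stimulus_intensity >= threshold:
        return "withdraw limb"
    return None

def brain_process(stimulus_intensity):
    # Central loop: slower, deliberative interpretation of the same signal.
    return f"note stimulus of intensity {stimulus_intensity:.1f}"

def nervous_system(stimulus_intensity):
    actions = []
    reflex = spinal_reflex(stimulus_intensity)
    if reflex:                  # the reflex fires first, locally
        actions.append(reflex)
    actions.append(brain_process(stimulus_intensity))
    return actions

print(nervous_system(0.9))  # → ['withdraw limb', 'note stimulus of intensity 0.9']
```

The point of the sketch is architectural: some of the system’s intelligent behavior is generated outside the central processor, which is exactly what a brain-only model of intelligence leaves out.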
AI has now reached the point that it can be widely adopted, and in its current form will continue to grow and flourish as a useful tool. The next frontier is AGI or “strong” AI, and this is where further innovation is needed. Human intelligence is conscious. Strong AI, if it is to be truly human-like, will also need to be conscious. So when we speak of strong AI, like it or not, we are really speaking of machine consciousness.
However, long before machine consciousness is achieved, we will be able to combine numerous coordinated AIs to produce a good imitation of AGI. It will exhibit “general intelligence,” in that it will be able to perform functions across a range of spheres of intelligence. It will do so by utilizing broad sets of AIs, each dedicated to a specific function. It will be a very useful machine, but it will not seem to have human-like intelligence. It will not have the versatility of the human mind, and we will not be able to identify with it as something like us.
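The architecture of such an imitation-AGI is simple to sketch: a dispatcher routes each task to a dedicated narrow specialist. The module names and stub behaviors below are hypothetical stand-ins for real narrow-AI systems.

```python
# Sketch of "coordinated narrow AIs": a dispatcher routes each task to a
# dedicated specialist. The specialists are stubs standing in for real
# narrow-AI systems (vision, translation, planning, and so on).

def vision_module(task):
    return f"[vision] analyzed image: {task['data']}"

def language_module(task):
    return f"[language] translated text: {task['data']}"

def planning_module(task):
    return f"[planning] produced plan for: {task['data']}"

SPECIALISTS = {
    "image": vision_module,
    "text": language_module,
    "goal": planning_module,
}

def dispatch(task):
    # The "general" behavior emerges from routing, not from any one module.
    handler = SPECIALISTS.get(task["kind"])
    if handler is None:
        return "[dispatcher] no specialist for this task"
    return handler(task)

print(dispatch({"kind": "text", "data": "bonjour"}))
```

Notice what is missing: there is no shared understanding across modules, only routing. That gap is why such a system can span many spheres of intelligence yet still not strike us as something like us.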
The ‘seat of the intellect’ may be in the brain, but the intellect is only one component of human intelligence. Artificial General Intelligence (AGI) will require including the functional equivalent of the rest of the CNS. It is this sub-cerebral CNS (body-based) processing that generates sentience. For a deeper discussion of the role of sentience in consciousness, please see my earlier posts.
The next step then, is to expand our models of the brain to include the entire nervous system. Does this increase the enormity of the task ahead? You bet it does. But modeling ourselves has been an audacious endeavor from the outset.