by GWDarcie | Feb 7, 2015 | Robotech
As a clinical psychologist, I have been studying people academically, professionally and personally for over 40 years. What I have learned over all that time is how little I know.
You, a human being, are simply the most complex thing in the known universe. Your brain alone can hold that distinction, but treating your brain as separate from the rest of you is misleading. Your brain is an integral and inseparable part of the larger open system that is you, and when all connected components and subsystems are included in the analysis, you are far more complex still than just your brain. Get a whole bunch of people together, all open, interacting systems, and the complexity rises exponentially.
All of that complexity makes us a truly exceptional species. No other species studies itself the way we do. No other species could even aspire to building a working replica of itself. Indeed, given our complexity, the goal of building a human-like android is audacious in the extreme.
We set out on this quest to build replicas of ourselves long before we had any idea how difficult it would actually be. The relentless march of science, providing us with an ever expanding knowledge base, has enlightened us in this regard. The more we know, the more we see how much we don’t know.
Yet that doesn’t stop us from pushing to expand the boundaries of our knowledge. On the contrary. Science has moved into numerous areas once thought to be beyond its purview, and whole new areas of study are continuously opening up. It is a daunting task just to list all the disciplines that will need to be drawn from in order to create a truly human-like machine.
But why? Why would we want to do that? Surely there are enough humans already, without trying to build more. Surely we have all heard the dire warnings, the disastrous consequences foreseen by generations of science fiction writers. Why even try to build intelligent machines?
Why build machines at all? It turns out we don’t have much choice. We are a tool-making species, and machines are our most powerful tools. We use tools to extend our capabilities. It was our tools that enabled us to carve out a dominant place in the natural world. We build machines to enable us to do things and go places that would otherwise be beyond us.
Early machines gave us physical power, enhancing our strength and speed. More recent machines give us intellectual power, augmenting our memories and calculating abilities. Essentially, we build machines to help us, and being as vulnerable and limited as we are in a vast, indifferent universe, we need all the help we can get.
When we need help, as social animals we instinctively want to turn to each other. Unfortunately, as individuals, we tend to be rather unreliable. Sometimes we are able to get what we need from each other, sometimes not. Machines reduce our reliance on each other.
We build machines to help us, but there are so many things we need help with that we need to surround ourselves with machines. It would require a machine of extraordinary versatility to improve on this, and there are no such machines. In fact, the most versatile and adaptive thing on earth is the human being, so if we want to build something truly helpful, it makes sense to use us as a model.
Are there risks in building such machines? Definitely. Which is why now is a good time to start figuring out how to get it right.
by GWDarcie | Feb 11, 2015 | Robotech
Two kinds of conscious machines have been popular in science fiction: conscious A.I. and conscious robots. Conscious A.I., essentially consciousness-in-a-box, is almost always portrayed as dangerous, as indeed it would be, if such a thing were possible. Conscious robots are generally portrayed as being just like us, and like us, they can be either good or bad.
In the vast majority of science fiction stories about conscious A.I., the machine consciousness happens by some mysterious accident, unplanned and unexpected. In stories about robots, machine consciousness is either just accepted as a given, or again just somehow spontaneously happens. Both of these portrayals of machine consciousness are pure fantasy, literary devices needed when human consciousness was still a complete mystery.
In The Science Behind In Synthient Skin, I will try to explain how machine consciousness is indeed plausible, and why it will never happen by accident. While we certainly don’t yet have all the answers, we now know enough about human consciousness to know that there are specific requirements that must be met before it can emerge.
Even this rudimentary knowledge allows us to debunk many common beliefs, like the one that holds that if you feed enough data about the world into an A.I., at a certain point consciousness will spontaneously appear. The evidence to date concurs with the neuroscience: it doesn’t matter how many yottabytes of data you feed into an A.I., it will still be no more conscious than your laptop.
Science allows us to confidently dismiss another common theme in fiction: the idea that machine consciousness will be created by some mad scientist in his lab. On the contrary, it will require an extraordinary team effort, akin to the Human Brain Project (https://www.humanbrainproject.eu/), a ten-year effort currently underway, involving hundreds of researchers from over twenty countries. Android development will benefit directly from the fruits of this huge effort.
Before we can have an intelligent conversation about machine consciousness, we have to come to some agreement about the definition of terms. The field of A.I. has for decades been rife with semantic confusion, and the same will happen with machine consciousness research if no consensus can be reached. I am just laying out the problem, not claiming to have the answers.
The problem is that there is no agreed upon definition of intelligence, and no consensus on what consciousness even is. Both have been defined in many different ways and subdivided into many different aspects. Many, many passionate arguments have arisen between factions using the same word but unwittingly talking about different things.
To simplify things at the start, I will clarify that when I speak of consciousness, I am speaking of human-like consciousness, consciousness that we can all identify with, the consciousness of our shared experience. For our present purposes, cosmic or universal or animal or any other kind of “consciousness” is not what we are talking about. With regard to machine consciousness, anything other than human-like consciousness would be impossible for us to relate to or communicate with, or to share any kind of kinship with. In a machine, such alien consciousness, were it possible, would be truly dangerous.
No, what we need in conscious machines are ones we can relate to; machines that can in turn relate to us. As I will later show, human-like consciousness requires a human-like body. When we are speaking of machine consciousness, then, we are really talking about sentient androids. In future entries, I will explain why consciousness, in any recognizable form, cannot exist in a box, and I will discuss what “sentience” means in the context of robotics.
by GWDarcie | Feb 15, 2015 | Robotech
The idea of intelligent machines has been around at least since the time of Turing’s work in the 1940s. Artificial intelligence (A.I.) has become a mature field, and is now poised on the brink of widespread application across all existing technologies. Despite this, early hopes were never realized, and today’s A.I. is not what it was once expected to be.
It became apparent early on that A.I., while being far superior to humans in doing certain specific things, could not come close to the general breadth of human intelligence. The goal of creating artificial general intelligence, or “strong A.I.,” has proven to be surprisingly difficult.
In my last blog I identified the difficulty in even defining what “intelligence” is. In fact, so many different definitions have been proposed that it is now accepted that there are many distinct facets of what could be called intelligence (for a small sampling, see http://en.wikipedia.org/wiki/Human_intelligence).
We can hope to replicate the functions of many of these individual facets of intelligence, as we already do with decision making, numerical analysis and the other current A.I. applications. Even then, the result will be a number of individual A.I.s working in parallel; useful, certainly, but not the strong A.I. that was the original goal. Something will still be missing, and that something is sentience. The “general” nature of human intelligence arises from a foundation of sentience.
Sentience
Sentience, from the Latin “sentiēns” (feeling, perceiving), is the ability to feel, sense or experience perceptions subjectively. Sentience represents a distinct knowledge system, one that is distributed throughout the body, separate from but integrated with brain-based intelligence. Our bodies know things that we may not even be conscious of. They use this knowledge for self-maintenance and repair, for self-protection, and for many other autonomous functions. No conscious involvement is required from us. Arguably, all living creatures have some degree of sentience, as it is essential to the ability to survive and thrive.
Human sentience is complex and nuanced: through consciousness, we experience our own embodiment – we “feel” ourselves – while at the same time perceiving the world around us and our interaction with it. This body-based knowledge operates outside the realm of logic, reason and language, often even outside of our awareness, and so falls outside the traditional scope of A.I.
If we want to have a truly human-like android, we have to go beyond the limitations of machine intelligence and add machine sentience. The distinction between intelligence and sentience is crucial to understand, but in practice, they are interdependent qualities. Human-like sentience will require intelligence, and true general intelligence will require sentience.
Sentience is fundamental to human consciousness.
The foundation of human consciousness, as eloquently described by Antonio Damasio in his classic The Feeling of What Happens, is the moment-to-moment orchestra of sensation arising from the biophysical activity of the body. This constant stream of data provides continuous feedback to the body’s maintenance systems, enabling the autonomic responses necessary to maintain homeostasis, nutrient balances, and waste management. It also provides feedback as to body position and orientation, and critical information about the local environment.
The brain integrates this flow of data into a coherent model of the current bio-state of the body as a whole, and this model is what we experience as our “self.” We have no awareness of the model per se; rather, our conscious experience consists of perceiving changes in the model as they occur, and this ongoing process of change is the content of our present experience.
In order for this process to work, we need sensory and feedback data from critical functional systems, we need to compile the data from all sources into a single, coherent representational model, we need a memory system to make temporal comparisons and note changes as they occur, and we need a meta-level compiler to integrate those changes into patterns of neural stimulation that are the stuff of experience.
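For readers who like to see an idea in code, here is a toy sketch of that update loop in Python. It is a minimal illustration only: the sensor channels, class names and the representation of experience as a stream of detected changes are my own inventions for the example, not a blueprint for real neural integration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BodyState:
    """One snapshot of the integrated model of the body as a whole."""
    readings: dict  # hypothetical channels, e.g. {"temperature": 36.9, "balance": 0.1}

class CoreLoop:
    """Toy core-consciousness loop: experience modeled as detected change."""

    def __init__(self):
        self.previous: Optional[BodyState] = None  # memory for temporal comparison

    def integrate(self, sensor_data: dict) -> BodyState:
        # Stand-in for compiling data from all sources into a single,
        # coherent representational model of the current bio-state.
        return BodyState(readings=dict(sensor_data))

    def step(self, sensor_data: dict) -> dict:
        current = self.integrate(sensor_data)
        # Only the changes are "experienced," never the model itself.
        changes = {}
        if self.previous is not None:
            for channel, value in current.readings.items():
                old = self.previous.readings.get(channel)
                if old is not None and old != value:
                    changes[channel] = (old, value)
        self.previous = current
        return changes

loop = CoreLoop()
loop.step({"temperature": 36.9, "balance": 0.0})  # first pass: nothing to compare
loop.step({"temperature": 37.1, "balance": 0.0})  # yields {"temperature": (36.9, 37.1)}
```

The point of the sketch is only the shape of the process: sense, compile, compare, update. Everything of substance lies in how a real system would do the compiling.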
This is not the entirety of human consciousness, but only the prerequisite, a base level that Damasio calls “core consciousness.” I will discuss the other levels in future blogs.
Sentience is fundamental to human consciousness and human intelligence. Unless an android is sentient, it cannot be conscious in the human sense, no matter how much “intelligence” is packed into it.
In my next blog I will further explain how body-based knowledge lays a critical foundation for brain-based knowledge.
by GWDarcie | Feb 22, 2015 | Robotech
The Mechanism Me, as should be obvious from the title, pays homage to Asimov’s classic I, Robot series. It is my respectful attempt to update the venerable “positronic brain” with a robotic system that reflects current trends in science and technology.
Asimov’s prodigious foresight in envisioning a “positronic brain” anticipated the field of computational neuroscience by half a century. Only now can we start to fill in some of the details on how it might actually work.
There is a tremendous amount of work going on today with the goal of building an artificial brain, e.g., the Human Brain Project. A lot of the previous work in this area seems to have overlooked a key understanding coming out of the neurosciences: the human brain is not an isolated organ. Intelligence, sentience and consciousness are all the product of an integrated human knowledge system that includes the whole body. Building an artificial brain will not lead to something that thinks like us. An artificial brain needs to be housed in an artificial body before anything like human intelligence, sentience or consciousness can emerge.
Human-like artificial general intelligence requires the sentience that can only be provided by a body. Developmental neuropsychology tells us that the neurological substrates of logic and reason, primarily located in the cerebral cortex, are not yet functional in the first few years of life. Despite this, infants and toddlers learn at a prodigious rate. If their intellects are not even working yet, how are they learning? They are learning with their entire nervous systems, not just their brains. Bodies have their own separate memory systems. What are they learning? They are learning the foundational knowledge upon which all other knowledge will later be based.
How do we know what we know?
From day one, we learn from experience. Specifically, we learn from feeling our bodies as they experience their environments. Through repeated and ongoing exposure to stimuli, our bodies learn associations, that is, what goes with what. Some forms of stimulation are associated with good feelings, some with bad. The parts of our knowledge system that control our bodies, our body-brains, are designed to learn to make movements that produce desired effects (comfort, pleasure) and inhibit those that produce undesired effects (discomfort, pain).
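As a toy illustration, the learning rule just described might look something like this in code. Every name here is invented for the example; real body-brains are not weighted dictionaries.

```python
import random

class MovementLearner:
    """Toy body-brain: reinforce movements that feel good, inhibit ones that hurt."""

    def __init__(self, movements):
        self.weights = {m: 1.0 for m in movements}  # start with no preferences

    def choose(self):
        # Pick a movement in proportion to its learned weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for movement, weight in self.weights.items():
            r -= weight
            if r <= 0:
                break
        return movement

    def feel(self, movement, valence):
        # valence > 0 for comfort/pleasure, valence < 0 for discomfort/pain.
        # Weights never drop to zero: inhibited is not the same as forgotten.
        self.weights[movement] = max(0.01, self.weights[movement] + valence)

baby = MovementLearner(["reach", "grasp", "withdraw"])
movement = baby.choose()
baby.feel(movement, +0.5)  # that one felt good, so do it more often
```

Crude as it is, the sketch shows the essential point: the learner needs no concepts, no language and no reasoning, only felt consequences.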
As infants, our limbs randomly move about and encounter things. We discover that some things move when touched and some don’t, some things are graspable and some aren’t. The things that move can be within our reach or move out of it. Through efforts to reach, we learn to distinguish between near and far: the near-far dichotomy. As we explore whatever comes within reach, we learn that things have differing qualities, requiring different sorts of interactions. We learn distinctions, the easiest always involving opposite poles of a dichotomy. Some things feel warm and some cold. Soft things are safe against our skin but hard things can hurt. We learn the concept of the soft-hard dichotomy. Some things we can lift or move, others we can’t: the light-heavy dichotomy.
We gradually figure out that through our own muscle movements we can change our orientation to the world around us. We can roll over, push our eyes away from the ground and gain a higher perspective. Higher still and we’re sitting up, but at constant risk of falling down again. Now we know up-down. All of this learning is done by the subcortical regions of our nervous systems, as our cerebral cortex and hippocampal memory systems are not yet functional. In other words, our foundational knowledge is not the conscious knowledge of the reasoning brain, but rather the sensory, emotional and procedural knowledge acquired by the deeper body-brain regions through felt experience.
Through our body’s experience of its environment we learn all the core dichotomies that form the foundation for all the knowledge we will ever possess. Hunger-satiation, pleasure-pain, hot-cold: each dichotomy creates the raw material for analogy. Something can be not physically hot but have a quality that evokes the experience of hot, and we understand what that means. Something with no mass can be described as “heavy” and the description makes sense because we know what heavy feels like. Over time our dichotomies become more sophisticated and complex and form the basis for increasingly abstract analogies. Something new must be like something we already know, grounded in primal sensory experience, before we can make any sense of it. This is the contribution of the body-brain to human knowledge, and without this body-based knowledge, no machine can ever know the world as a human does.
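If it helps to see this as a data structure, here is a deliberately simple sketch: each learned dichotomy becomes an axis anchored in felt experience, and abstract analogies borrow positions on those axes. The axes and values are, again, invented for illustration.

```python
# Each dichotomy is an axis learned through the body, with felt poles.
dichotomies = {
    "hot-cold":    ("cold", "hot"),
    "light-heavy": ("light", "heavy"),
    "soft-hard":   ("soft", "hard"),
}

# Abstract things acquire meaning by borrowing a position on a felt axis:
# a "heavy" subject has no mass, but we know what heavy feels like.
analogies = {
    "a heated argument": ("hot-cold", +0.8),
    "a heavy subject":   ("light-heavy", +0.9),
    "a hard problem":    ("soft-hard", +0.7),
}

for phrase, (axis, position) in analogies.items():
    low, high = dichotomies[axis]
    pole = high if position > 0 else low
    print(f"{phrase!r} is grounded in the felt experience of {pole!r}")
```

The meaning of each abstract phrase is parasitic on a felt pole; remove the felt experience and the analogy has nothing to stand on.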
It used to be thought that if a database of useful information about the world were extensive enough, it would enable a computer to understand things. But there is no magical threshold of complexity at which consciousness spontaneously arises. We now know that there are specific conditions that must be met. Consciousness is both embodied and relational. Self can only be known from the experience of being-in-the-world provided by a body, and self can only be recognized in relation to other. Similar bodies produce similar experiences, and so create the ground for a common experience of consciousness, a commonality that enables us to know ourselves by reflection and to relate to each other as beings. No matter how extensive the database, all a computer can do is regurgitate the data. It can’t understand the data in the human sense. Artificial Intelligence, as originally conceived, can never acquire consciousness.
Strong A.I. and Sentience
Artificial general intelligence, or strong A.I., will require sentience, which requires a sensor-rich body and a brain. The body is needed to accumulate and feed real-time sensory data to the brain. Attaching sensors to robots is as old as robotics, but human-like sentience requires an enormous amount of sensory data. A lot of good work is going on in the field of machine sensing, at least with the primary senses.
To complete the system, the brain must be able to process sensory data into meaningful perceptions that can be remembered, compared and contrasted; things that A.I. happens to do very well. In order for an android to be intelligent in a way that we can relate to, it must have a fully integrated brain-body system. Like us.
Next: biomimetics.