Can machines think?

Here is a riddle. I am invisible and obvious at the same time. I seem so substantial that I occupy most of your waking attention yet I am completely insubstantial. I carry the apparent weight of critical importance while being largely irrelevant. I constantly blind-side you from out of nowhere but am under your control. I am natural yet completely disregard the laws of nature. What am I?

I’m your thoughts, of course. Thinking is so subjectively invisible that few people ever notice they are doing it. When attention is focused on it, it is so subjectively obvious that few stop to wonder about it. So, can a machine think? That depends on how you define the word “think”, which turns out to be a lot harder than you might think.

What is thinking? What does it mean to think?

These are questions only a psychologist could love. Don’t ask me why, but psychologists tend to love hard questions. But I won’t bore you with the long list of cognitive activities that are commonly referred to as thinking. As AI research pushes against its limits, researchers are increasingly turning to the cognitive sciences for answers. And the short answer is, no, machines cannot think. At least not yet. (Eventually they will.) When they are computing, they perform a limited array of cognitive tasks, and can do so very well. Some can even learn. But they are not thinking in the way that humans think.

Computers run instructions encoded in binary, which in turn produce more binary code. Human thought consists largely of mental images (cognitive maps and models), which include symbols, words, and sounds as well as pictures, all tied to previous experience.
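To make the contrast concrete, here is a toy sketch of the machine side of that comparison: a few made-up binary opcodes stepped through one at a time, producing nothing but more numbers. (The instruction format here is invented for illustration; it is not any real processor's instruction set.)

```python
# A toy illustration: a machine steps through binary-coded instructions
# that only ever produce more binary values -- no images, no meanings.
# Format (hypothetical): high 4 bits = opcode, low 4 bits = operand.
program = [0b0001_0101, 0b0010_0011, 0b0000_0000]

accumulator = 0
for word in program:
    opcode, operand = word >> 4, word & 0x0F
    if opcode == 0b0001:      # LOAD: put the operand in the accumulator
        accumulator = operand
    elif opcode == 0b0010:    # ADD: add the operand to the accumulator
        accumulator += operand
    else:                     # opcode 0 = HALT: nothing further happens
        break

print(accumulator)  # 8 -- just a number; the machine attaches no meaning to it
```

The output is correct arithmetic, but nothing in the process resembles a mental image tied to experience; it is bit manipulation all the way down.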

Computers remember by storing bits and recording their location for later access. They are not reminded of things, nor do they try to recall things. Humans have at least three distinct memory systems, each working out of different brain subsystems. We can “think” about the past, but we also remember lots of things without thinking about them, such as how to ride a bike.
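The point about machine memory can be sketched in a few lines: storage and retrieval work only by explicit address lookup, with no reminding, no association, and no partial recall. (The addresses and values below are arbitrary examples.)

```python
# A minimal sketch of machine "memory": bits are stored at an address and
# retrieved only by knowing that exact address -- nothing cues a recollection.
memory = {}

def store(address, bits):
    """Record the bits and where they live; nothing else happens."""
    memory[address] = bits

def load(address):
    """Retrieval requires the exact address in advance."""
    return memory.get(address)

store(0x2F, 0b1011)
print(load(0x2F))   # 11 -- the stored bits, returned verbatim
print(load(0x30))   # None -- no near-miss, no association, no reminding
```

Compare that with being reminded of a bicycle by the smell of rain: human memory is triggered and reconstructive, not a table lookup.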

Machines are getting pretty good at imitating some of our mental activities. In fact, I consider the Turing Test to be obsolete. There are apparently many computers that can now pass it. They do so with the help of some very good programming that enables them to imitate people quite well. It turns out we are not that hard to imitate in casual conversation. We tend to be quite mechanical with each other, speaking in clichés and well-worn scripts. But just because a machine can simulate a conversation does not mean that it can think. The process that generates the words has nothing in common with how we generate speech.
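How little machinery casual conversation requires is easy to demonstrate. The sketch below, in the spirit of early pattern-matching chatbots like ELIZA (the cues and canned replies here are my own invented examples, not any real system's script), produces plausible small talk with no model of meaning at all:

```python
import random

# A toy script-driven responder: canned cue/response pairs produce
# plausible replies without any understanding of what was said.
SCRIPTS = {
    "how are you": ["Fine, thanks. And you?", "Can't complain."],
    "i think": ["Why do you think that?", "Interesting -- go on."],
}
FALLBACKS = ["Tell me more.", "I see.", "What makes you say that?"]

def reply(utterance):
    """Match a cue phrase if possible; otherwise deflect with a stock line."""
    lowered = utterance.lower()
    for cue, responses in SCRIPTS.items():
        if cue in lowered:
            return random.choice(responses)
    return random.choice(FALLBACKS)

print(reply("I think machines might be conscious."))
```

A handful of well-worn scripts goes a surprisingly long way, which is precisely why passing a casual-conversation test says so little about thinking.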

Machines can now perform an impressive array of cognitive tasks. If you want to call what machines do “thinking”, then feel free. But it is a qualitatively different thing from what goes on in the human mind. It is not “human-like” thinking. Strong AI will require the ability to think in a human-like way. As you know from my earlier posts, I believe that sentience is the long-overlooked key to success. If I am right, the first thinking machine will not be a computer. It will be an android. Why? Because sentience is an embodied process. It will be far easier to build sentience into an android than to simulate sentience in a box.

Why Artificial General Intelligence (AGI) has been so elusive

There is a simple reason why Artificial General Intelligence (AGI) has been so elusive. To date, most work on AGI has been based on a misconception: that human intelligence is produced by the brain. Yes, the brain plays a large role, but it does not act alone. It is but one part of a larger functional system: the central nervous system (CNS).

The division between brain and body is a vestige of an obsolete dualism that continues to exert a strong influence in science education. Yet the division is a false one. The confusion persists because the brain is structurally separable (a definable organ) but functionally inseparable from the CNS. The brain depends on input from the body in order to work.

As long as AGI research seeks a functional human-like intelligence, it is a fallacy to consider the brain to be a separable component. The data that contributes to general intelligence is collected and processed by the entire CNS. There are clusters of neurons outside the brain, particularly in the spinal cord, that process information and pass it on up – little mini-brain sub-processing stations, if you like. As I have earlier explained, the sub-cortical body-brain networks produce the foundational knowledge upon which all other knowledge is based.

AI has now reached the point that it can be widely adopted, and in its current form will continue to grow and flourish as a useful tool. The next frontier is AGI or “strong” AI, and this is where further innovation is needed. Human intelligence is conscious. Strong AI, if it is to be truly human-like, will also need to be conscious. So when we speak of strong AI, like it or not, we are really speaking of machine consciousness.

However, long before machine consciousness is achieved, we will be able to combine numerous coordinated AIs to produce a good imitation of AGI. It will exhibit “general intelligence,” in that it will be able to perform functions across a range of spheres of intelligence. It will do so by utilizing broad sets of AIs, each dedicated to a specific function. It will be a very useful machine, but it will not seem to have human-like intelligence. It will not have the versatility of the human mind, and we will not be able to identify with it as something like us.
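The architecture of such an imitation AGI can be sketched in a few lines: a dispatcher routing each task to a narrow, special-purpose module. (The module names and domains below are hypothetical placeholders, not a real system design.)

```python
# A sketch of "imitation AGI": a dispatcher hands each task to one of many
# narrow modules. It looks general from outside, but is only a lookup table.
def translate(task):  return f"[translation of: {task}]"
def do_math(task):    return f"[computed answer for: {task}]"
def caption(task):    return f"[image description for: {task}]"

MODULES = {"translate": translate, "math": do_math, "vision": caption}

def general_intelligence(domain, task):
    """Route a task to a dedicated narrow AI; fail outside known domains."""
    module = MODULES.get(domain)
    if module is None:
        # Unlike a human mind, there is no versatile fallback.
        return "no module for this domain"
    return module(task)

print(general_intelligence("math", "2 + 2"))
print(general_intelligence("poetry", "write a haiku"))
```

The seams show exactly where the essay predicts: breadth comes from enumerating domains in advance, and anything outside the list exposes the absence of a versatile, human-like mind.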

The ‘seat of the intellect’ may be in the brain, but the intellect is only one component of human intelligence. Artificial General Intelligence (AGI) will require including the functional equivalent of the rest of the CNS. It is this sub-cerebral CNS (body-based) processing that generates sentience. For a deeper discussion of the role of sentience in consciousness, please see my earlier posts.

The next step, then, is to expand our models of the brain to include the entire nervous system. Does this increase the magnitude of the task ahead? You bet it does. But modeling ourselves has been an audacious endeavor from the outset.