Archives for: January 2016

Can machines think?

Here is a riddle. I am invisible and obvious at the same time. I seem so substantial that I occupy most of your waking attention, yet I am completely insubstantial. I carry the apparent weight of critical importance while being largely irrelevant. I constantly blindside you from out of nowhere, but am under your control. I am natural yet completely disregard the laws of nature. What am I?

I’m your thoughts, of course. Thinking is so subjectively invisible that few people ever notice they are doing it. When attention is focused on it, it is so subjectively obvious that few stop to wonder about it. So, can a machine think? That depends on how you define the word “think”, which turns out to be a lot harder than you might think.

What is thinking? What does it mean to think?

These are questions only a psychologist could love. Don’t ask me why, but psychologists tend to love hard questions. I won’t bore you with the long list of cognitive activities that are commonly referred to as thinking. As A.I. research pushes against its limits, researchers are increasingly turning to the cognitive sciences for answers. And the short answer is, no, machines cannot think. At least not yet. (Eventually they will.) When they are computing, they perform a limited array of cognitive tasks, and can do so very well. Some can even learn. But they are not thinking in the way that humans think.

Computers execute instructions encoded in binary, which in turn read and write more binary data. Human thought consists largely of mental images (cognitive maps and models), which include symbols, words and sounds as well as pictures, all tied to previous experience.
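To make the contrast concrete, here is a minimal sketch of what "executing instructions" actually looks like. Python's standard `dis` module disassembles even a trivial function into the low-level instruction sequence the interpreter steps through; the function name `add` is just an illustration, not anything from a real system:

```python
import dis

def add(a, b):
    # One line of "thought" for a human reader...
    return a + b

# ...is, to the machine, a short list of opcodes operating on a stack:
# load a, load b, apply a binary operation, return the result.
dis.dis(add)
```

Nothing in that instruction stream resembles a mental image or a cognitive map; it is a recipe for shuffling bits, nothing more.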

Computers remember by storing bits and recording their location for later access. They are not reminded of things, nor do they try to recall things. Humans have at least three distinct memory systems, each working out of different brain subsystems. We can “think” about the past, but we also remember lots of things without thinking about them, such as how to ride a bike.
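A toy sketch of that storage-and-lookup style of machine memory, assuming nothing beyond a plain key-value store (the addresses and helper names here are invented for illustration):

```python
# Machine "memory": values live at explicit addresses and are retrieved
# only by looking those addresses up. Nothing here is ever "reminded"
# of anything; retrieval requires already knowing where to look.
memory = {}

def store(address, value):
    memory[address] = value

def recall(address):
    # Fails outright on an unknown address; there is no associative
    # drift toward related or similar content, as in human recall.
    return memory[address]

store(0x10, "how to ride a bike")
print(recall(0x10))  # -> how to ride a bike
```

Human memory, by contrast, is content-addressable and associative: the smell of rain can surface a childhood afternoon without any "address" being consulted at all.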

Machines are getting pretty good at imitating some of our mental activities. In fact, I consider the Turing Test to be obsolete. There are apparently many computers that can now pass it. They do so with the help of some very good programming that enables them to imitate people quite well. It turns out we are not that hard to imitate in casual conversation. We tend to be quite mechanical with each other, speaking in cliches and well-worn scripts. But just because a machine can simulate a conversation does not mean that it can think. The process that generates the words has nothing in common with how we generate speech.
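The "cliches and well-worn scripts" point goes back at least to ELIZA in the 1960s. A toy responder in that style shows how little machinery a passable exchange requires; the patterns and replies below are made up for illustration:

```python
import re

# An ELIZA-style responder: canned patterns and scripted replies.
# It produces plausible-sounding conversation with no model of
# meaning at all, which is exactly why passing casual conversation
# proves so little about thinking.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI think (.+)", re.I), "What makes you think {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]
DEFAULT = "Tell me more."

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about machines"))
# -> Why do you say you are worried about machines?
```

The trick is pure surface manipulation: the program echoes fragments of the input back inside a template, and the human on the other end supplies all the meaning.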

Machines can now perform an impressive array of cognitive tasks. If you want to call what machines do “thinking”, then feel free. But it is a qualitatively different thing from what goes on in the human mind. It is not “human-like” thinking. Strong AI will require the ability to think in a human-like way. As you know from my earlier posts, I believe that sentience is the long-overlooked key to success. If I am right, the first thinking machine will not be a computer. It will be an android. Why? Because sentience is an embodied process. It will be far easier to build sentience into an android than to simulate sentience in a box.