The Myth of Sentient Machines: Why Digital Computers Can’t Have Consciousness


This article was originally published at Psychology Today and posted to the Reddit philosophy board, where it generated over 1,000 comments.

Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios they believe are likely to arise as a result of machines with motives. Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us—or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper. In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Indeed, there is little doubt that future A.I. will be capable of doing significant damage. For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before. Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and most relied-upon medium for global exchange.

But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us. In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.

This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations. The type of A.I. that includes these features is known amongst the scientific community as “Strong Artificial Intelligence”. Strong A.I., by definition, should possess the full range of human cognitive abilities. This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.

On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency. Such an A.I. cannot experience the world qualitatively, and although it may exhibit seemingly intelligent behavior, that behavior is forever limited by the lack of a mind.

A failure to recognize the importance of this strong/weak distinction could be contributing to Hawking and Musk’s existential worries, both of whom believe that we are already well on a path toward developing Strong A.I. (a.k.a. Artificial General Intelligence). To them it is not a matter of “if”, but “when”.

But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today’s computers’ total absence of any intentional behavior whatsoever. Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.

This is because brains and computers work very differently. Both compute, but only one understands—and there are some very compelling reasons to believe that this is not going to change. A deeper, more technical obstacle appears to stand in the way of Strong A.I. ever becoming a reality.

Turing Machines Aren’t Thinking Machines

All digital computers are binary systems. This means that they store and process information exclusively in terms of two states, which are represented by different symbols—in this case 1s and 0s. It is an interesting fact of nature that binary digits can be used to represent most things: numbers, letters, colors, shapes, images, and even audio, with near-perfect accuracy.
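As a rough illustration of that encoding (a minimal Python sketch; the particular values are invented for this example), a number, a letter, and a color can all be reduced to nothing but strings of bits:

```python
# A minimal sketch of how different kinds of data reduce to bits.
# The specific values here are illustrative, not drawn from the article.

number = 42
letter = "A"
color = (255, 160, 0)  # an RGB triple, one byte per channel

print(format(number, "08b"))                      # 00101010
print(format(ord(letter), "08b"))                 # 01000001 (the character's code point)
print("".join(format(c, "08b") for c in color))   # 24 bits standing in for a color
```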

This two-symbol system is the foundational principle that all of digital computing is based upon. Everything a computer does involves manipulating two symbols in some way. As such, they can be thought of as a practical type of Turing machine—an abstract, hypothetical machine that computes by manipulating symbols.

A Turing machine’s operations are said to be “syntactical”, meaning they only recognize symbols and not the meaning of those symbols—i.e., their semantics. Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.
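To make this concrete, here is a toy Turing machine sketched in Python (the rule table, tape contents, and state names are invented for illustration, not drawn from any real system). It blindly rewrites symbols according to a lookup table; nothing in the program represents what “0” or “1” means:

```python
# A toy Turing machine: purely syntactic symbol shuffling.
# The rule table and tape are made up for illustration; the machine
# "knows" nothing about what the symbols stand for.

rules = {
    # (state, symbol) -> (symbol to write, head movement, next state)
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # blank cell: stop
}

def run(tape, state="flip", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape)

print(run("0110_"))  # -> "1001_" : every rule applied, nothing understood
```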

It does not matter how fast the computer is, how much memory it has, or how complex and high-level the programming language. The Jeopardy- and chess-playing champions Watson and Deep Blue fundamentally work the same way as your microwave. Put simply, a strict symbol-processing machine can never be a symbol-understanding machine. The influential philosopher John Searle has cleverly depicted this fact by analogy in his famous and highly controversial “Chinese Room Argument”, which has been convincing minds that “syntax is not sufficient for semantics” since it was published in 1980. And although some esoteric rebuttals have been put forth (the most common being the “Systems Reply”), none successfully bridge the gap between syntax and semantics. But even if one is not fully convinced by the Chinese Room Argument alone, that does not change the fact that Turing machines are symbol-manipulating machines and not thinking machines, a position taken by the great physicist Richard Feynman over a decade earlier.

Feynman described the computer as “A glorified, high-class, very fast but stupid filing system,” managed by an infinitely stupid file clerk (the central processing unit) who blindly follows instructions (the software program). Here the clerk has no concept of anything—not even single letters or numbers. In a famous lecture on computer heuristics, Feynman expressed his grave doubts regarding the possibility of truly intelligent machines, stating that, “Nobody knows what we do or how to define a series of steps which correspond to something abstract like thinking.”

These points present very compelling reasons to believe that we may never achieve Strong A.I., i.e., truly intelligent artificial agents. Perhaps even the most accurate of brain simulations will not yield minds, nor will software programs produce consciousness. It just might not be in the cards for a strict binary processor. There is nothing about processing symbols or computation that generates subjective experience or psychological phenomena like qualitative sensations.

Upon hearing this, one might be inclined to ask, “If a computer can’t be conscious, then how can a brain be?” After all, the brain is a purely physical object that works according to physical law. It even uses electrical activity to process information, just like a computer. Yet somehow we experience the world subjectively—from a first-person perspective where inner, qualitative, ineffable sensations occur that are accessible only to us. Take, for example, the way it feels when you see a pretty girl, drink a beer, step on a nail, or hear a moody orchestra.

The truth is, scientists are still trying to figure all this out. How physical phenomena, like biochemical and electrical processes, create sensation and unified experience is known as the “Hard Problem of Consciousness”, and it is widely recognized by neuroscientists and philosophers. Even neuroscientist and popular author Sam Harris—who shares Musk’s robot-rebellion concerns—acknowledges the hard problem when stating that whether a machine could be conscious is “an open question”. Unfortunately, he doesn’t seem to fully realize that for machines to pose an existential threat arising from their own self-interests, consciousness is required.

Yet although the problem of consciousness is admittedly hard, there is no reason to believe that it is not solvable by science. So what kind of progress have we made so far?

Consciousness Is A Biological Phenomenon

Much like a computer, neurons communicate with one another through exchanging electrical signals in a binary fashion. Either a neuron fires or it doesn’t, and this is how neural computations are carried out. But unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions, electrostatic forces, global synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops.
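As a loose illustration of that mix of analogue dynamics and all-or-nothing output, here is a toy “leaky integrate-and-fire” neuron in Python. The parameters are arbitrary and the model is a drastic simplification of a real neuron; the point is only that the membrane potential varies continuously while the output at each step is a binary spike:

```python
# Toy leaky integrate-and-fire neuron: continuous (analogue) membrane
# potential, binary (all-or-nothing) spike output. Parameters are
# arbitrary and this is a drastic simplification of a real neuron.

def simulate(inputs, threshold=1.0, leak=0.9, gain=0.3):
    v = 0.0        # membrane potential: an analogue quantity
    spikes = []    # output train: strictly binary
    for current in inputs:
        v = leak * v + gain * current   # leaky integration of input current
        if v >= threshold:              # all-or-nothing firing
            spikes.append(1)
            v = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate([1, 1, 0, 1, 1, 1, 0, 0]))  # -> [0, 0, 0, 0, 0, 1, 0, 0]
```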

Even if a computer could accurately create a digital representation of all these features, which in itself involves many serious obstacles, a simulation of a brain is still not a physical brain. There is a fundamental difference between the simulation of a physical process and the physical process itself. This may seem like a trivial point to many machine learning researchers, but when considered at length it appears anything but.

Simulation Does Not Equal Duplication

The Weak A.I. hypothesis says that computers can only simulate the brain, and according to some like John Searle—who coined the terms Strong and Weak A.I.—a simulation of a conscious system is very different from the real thing. In other words, the hardware of the “machine” matters, and mere digital representations of biological mechanisms have no power to cause anything to happen in the real world.

Let’s consider another biological phenomenon, like photosynthesis. Photosynthesis refers to the process by which plants convert light into energy. This process requires specific biochemical reactions that are only viable in a material with specific molecular and atomic properties. A perfect computer simulation—an emulation—of photosynthesis will never be able to convert light into energy, no matter how accurate it is and no matter what type of hardware you provide the computer with. However, there are in fact artificial photosynthesis machines. These machines do not merely simulate the physical mechanisms underlying photosynthesis in plants; instead they duplicate the biochemical and electrochemical forces, using photoelectrochemical cells that perform photocatalytic water splitting.
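A trivial sketch makes the distinction vivid. The Python function below (its conversion factor is invented purely for illustration) “simulates” photosynthesis by computing how much glucose a given number of photons might yield; run it in a dark room and you still get only a number, not a single molecule of sugar or oxygen:

```python
# A "simulation" of photosynthesis: it computes a description of the
# process but produces no actual glucose or oxygen. The conversion
# factor below is invented purely for illustration.

def simulated_photosynthesis(photons_absorbed, efficiency=1e-6):
    """Return an estimate of glucose molecules 'produced' (a number, not sugar)."""
    return photons_absorbed * efficiency

print(simulated_photosynthesis(5_000_000))  # 5.0 -- just arithmetic, no chemistry
```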

In a similar way, a simulation of water isn’t going to possess the quality of ‘wetness’, which is a product of a very specific molecular formation of hydrogen and oxygen atoms held together by electrochemical bonds. Liquidity emerges as a physical state that is qualitatively different from anything expressed by either element alone.

Even the hot new consciousness theory from neuroscience, Integrated Information Theory, makes very clear that a perfectly accurate computer simulation of a brain would not have consciousness like a real brain, just as a simulation of a black hole won’t cause your computer and room to implode. Neuroscientists Giulio Tononi and Christof Koch, who established the theory, do not mince words on the subject:

“IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.”

With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states. If this turns out to be possible without organic materials—which have unique molecular and atomic properties—it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.

The best approach to achieving Strong A.I. requires finding out how the brain does what it does first, and machine learning researchers’ biggest mistake is to think they can take a shortcut around it. As scientists and humans, we must be optimistic about what we can accomplish. At the same time, we must not be overly confident in ways that steer us in wrong directions and blind us from making real progress.

The Myth of Strong A.I.

Since as early as the 1960s, A.I. researchers have been claiming that Strong A.I. is just around the corner. But despite monumental increases in computer memory, speed, and processing power, we are no closer than before. So for now, just like the brainy sci-fi films of the past that depict apocalyptic A.I. scenarios, truly intelligent robots with inner conscious experience remain a fanciful fantasy.

5 thoughts on “The Myth of Sentient Machines: Why Digital Computers Can’t Have Consciousness”

  1. Look carefully at people: do we all perceive reality the same way, do we interpret reality the same way? The simple answer is no.
    How is that possible?
    From the argumentation that you report, we should all be “just human” in a standardised and predictable manner. So if you cannot understand people, why should you believe you understand possible future machines in such an absolute manner?
    I feel this is someone expressing his beliefs relative to how he perceives his reality, and beliefs are not science.

    • Sure, I can prove it. We both have the same squishy things inside our skulls. You know that you are conscious. It’s about the only thing you can be sure of. We know that we can disrupt your conscious experience by lesioning certain areas of your brain, or by disrupting how it integrates information using anesthetics. I can prove that I am also conscious, and not just a sophisticated robot, or a “p-zombie” as philosophers call it, because if you open my skull up I also have a brain that functions in much the same way. If our AIs showed even a hint of intentionality or volition, we might wonder whether they have subjective experience. But they do not. Leave Watson, the IBM supercomputer Jeopardy champ, running for a hundred years with no input, and it will not do a thing.

      I want to be clear about one thing. I am not saying that we cannot create conscious machines. We are examples of conscious machines. I’m simply saying that such a machine will require more than symbol manipulation. Turing machines aren’t sufficient for consciousness, and by consciousness I’m specifically referring to subjective, qualitative, first-person, inner experience.

        • I am afraid there’s no objective proof in such an argument. Even the “feeling” of being conscious could be attributed to a set of biochemical mechanisms such as the one deployed out of millions of years of evolutionary struggle that gave birth to the dualistic pain/reward system. Being able to shut down such circuits giving us the “feeling” of the totality of “Self” does not really constitute any empirical evidence in the absence of any quantitative, empirically measurable quantity representing such totality. In that sense, the “feeling” of being conscious remains as elusive as in the case of so-called “Qualia”. On the other hand, there was a reason for proclaiming a priority of volitional attributes in the presentation shown, in that it would stand for a perfectly measurable property of any system, even a lifeform from another planet. Moreover, the emphasis on antagonistic environments where certain agents struggle to expand vital space and enforce a plan was necessary for quantitative estimation of purposeful vs random choices, which, in my opinion, is the only truly objective and thus scientifically ponderable question, the rest being a sort of metaphysics. It was also prioritized to show the inherently relative nature of any “free-will” questioning solely in terms of strategic planning. One cannot, though, dismiss the possibility that if absolute reductionism were correct, any such notion of relative freedom could be doomed in a universe whose primary causes still remain iron-clad. No such answer is of course as yet possible in the absence of any true Theory-of-Everything, if one is possible at all. Ramifications aside, I would like to call attention to the very simple fact that we ourselves are as much “constructs”, engineered by Darwinian evolution, as any other and, in that sense, it still seems to me a rational argument that there could be a certain threshold of complexity allowing an internal “feeling” of totality in other types of agents/aliens or whatever, even if made of different materials. One can just imagine, for instance, the difficulty of a cosmonaut trying to estimate the degree of consciousness in some newly found, naturally shaped, enormous crystal structure of such vast complexity that it really makes up an eternal dreamer out of eternally circulating internal currents. You could still measure all the currents, but you would never be able to really understand or make any sense of those dreams!
