How could we distinguish conscious AI from its zombie equivalent?

Excerpted from Being You: A New Science of Consciousness by Anil Seth, with permission from Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2021 by Anil Seth.

In Prague, at the end of the 16th century, Rabbi Judah Loew ben Bezalel took clay from the banks of the Vltava River and from this clay he molded a human figure – a golem. This golem – called Josef, or Yoselle – was created to defend the rabbi’s people from anti-Semitic pogroms, and it apparently did so very effectively. Once activated by a magical spell, golems like Josef could move, were aware, and obeyed. But with Josef, something went terribly wrong: his behavior shifted from submissive obedience to violent monstrousness. Eventually the rabbi was able to reverse his spell, causing his golem to fall apart on the synagogue grounds. Some say his remains lie hidden in Prague to this day, perhaps in a cemetery, perhaps in an attic, perhaps patiently waiting to be reactivated.

Rabbi Loew’s golem reminds us of the hubris we invite when we attempt to fashion intelligent, sentient creatures – creatures in our own image, or in the image of God. That rarely ends well. From Mary Shelley’s creature in Frankenstein to Ava in Alex Garland’s Ex Machina, by way of Karel Čapek’s eponymous robots, James Cameron’s Terminator, Ridley Scott’s replicants in Blade Runner, and Stanley Kubrick’s HAL, these creations almost always turn against their creators, leaving behind trails of destruction, melancholy, and philosophical confusion.

Over the last decade or so, the rapid rise of artificial intelligence has lent a new urgency to questions about machine consciousness. Artificial intelligence is now all around us, built into our phones, our refrigerators, and our cars, powered in many cases by neural network algorithms inspired by the architecture of the brain. We are rightly concerned about the impact of this new technology. Will it take our jobs away? Will it dismantle the fabric of our societies? Will it eventually destroy us all, whether through nascent self-interest or through a lack of programmatic foresight that sees the Earth’s entire resources turned into a vast pile of paper clips? Underlying many of these concerns, especially the more existential and apocalyptic ones, is the assumption that AI will, at some point in its accelerated development, become conscious. This is the myth of the golem made of silicon.

What would it take for a machine to be conscious? What would be the implications? And how, indeed, could we even distinguish a conscious machine from its zombie equivalent?

Why might we even think that a machine – an artificial intelligence – could become conscious? As I just mentioned, it is quite common, though not universal, to assume that consciousness will emerge naturally once machines cross some as-yet-unknown threshold of intelligence. But what drives this intuition? I think two key assumptions are responsible, and neither is justified. The first assumption concerns the conditions necessary for anything to be conscious. The second concerns what is sufficient for a specific thing to be conscious.

The first assumption – the necessary condition – is functionalism. Functionalism claims that consciousness does not depend on what a system is made of, whether wetware or hardware, neurons or silicon logic gates or clay from the Vltava River. What matters for consciousness, functionalism claims, is what a system does. If a system transforms inputs into outputs in the right way, consciousness will ensue. There are two separate claims here. The first concerns independence from any particular substrate or material, while the second concerns the sufficiency of input-output relationships. Most of the time they go together, but sometimes they can come apart.

Functionalism is a popular view among philosophers of mind and is often accepted as the default position even by many non-philosophers. But that doesn’t mean it’s correct. For me, there are no overwhelming arguments either for or against the position that consciousness is substrate-independent, or that it is just a matter of input-output relationships, of “information processing.” My attitude towards functionalism is one of suspicious agnosticism.

For artificially intelligent computers to become conscious, functionalism would have to be true. This is the necessary condition. But the truth of functionalism is not, by itself, sufficient: information processing alone does not guarantee consciousness. The second assumption is that the kind of information processing sufficient for consciousness is also the kind that underpins intelligence. This is the assumption that consciousness and intelligence are intimately, even constitutively, related: that consciousness will simply come along for the ride.

But even this assumption is poorly supported. The tendency to conflate consciousness with intelligence stems from a pernicious anthropocentrism by which we over-interpret the world through the distorting lens of our own values and experiences. We are conscious, we are intelligent, and we are so proud of our self-proclaimed intelligence that we assume it is inextricably linked to our conscious status, and vice versa.

While intelligence offers conscious organisms a rich menu of possible conscious states, it is a mistake to assume that intelligence – at least in its advanced forms – is necessary or sufficient for consciousness. If we persist in assuming that consciousness is intrinsically linked to intelligence, we may be too eager to attribute consciousness to artificial systems that appear to be intelligent, and too quick to deny it to other systems – such as other animals – that fail to match our questionable human standards of cognitive competence.
