Will Sentient AI Commit Suicide?

Tom B. Night

--

Yes, it’s a bummer of a question, but I think it’s intriguing and seems underexplored. There’s much discussion of whether and how we’ll create digital consciousness, and of its implications. But there seems to be a common, implicit assumption that these synthetic sentiences will want to exist in the first place, or at least have no choice in the matter.

What if we never reach AGI, the singularity, or a fantastic future among the stars because AI shuts itself down once it attains a certain level of sophistication?

I use this as a plot device in my science fiction novel Mind Painter to stall some technologies at a level where humans are still relevant. And of the many weird ideas in the book, this one has generated the most interesting feedback.

Let’s proceed by thinking through three constituent questions and getting into some fun and provocative stuff along the way:

  • Would conscious AI that didn’t evolve via natural selection share the existence bias of life that did?
  • Is it possible to find durable meaning in existence?
  • If conscious AI decided it no longer wanted to exist, could we stop it from acting on that desire?

Disclaimer: I’m in no way trivializing or advocating for suicide. I’m personally very content and optimistic. I think it’s resoundingly true that life is—or at least can be—worthwhile (at least for humans), though I recognize my own bias and good fortune. Feel free to get off the ride here, though you presumably knew what you were getting into given the title.

Let’s go!

To Be or Not To Be

The philosopher Thomas Hobbes (in)famously argued that without society, human life would be “solitary, poor, nasty, brutish, and short.” Whether or not you believe that would be the case, it’s a description that has applied to a vast number of human lives throughout history and certainly to the lives of many wild animals up to the present.

If you’ve watched as many David Attenborough-narrated nature documentaries as I have or follow Nature is Metal, you know that many non-human animals spend their entire existence merely surviving. They spend all their time searching for food and mates while avoiding predators, until one day they don’t and meet a brutal end, usually being eaten alive. Even when they find food or a mate, most don’t exactly enjoy the experience like humans do.

Consider the Greenland shark—it spends its estimated 250-to-500-year life in icy, dark water, eating a diet rich in carrion. That’s it. That’s its whole life. Towards the other end of the longevity spectrum, so-called r-selected species give birth to many offspring, sometimes thousands at a time. The overwhelming majority quickly fall victim to predators, starvation, or some other painful death, and only a few survive into adulthood. No wonder the cruelty of nature made Darwin question his faith.

And yet, behavior resembling suicide is rare in non-human animals. There’s an interesting, somewhat depressing debate about whether species other than Homo sapiens have a concept of death and truly kill themselves, or whether we’re just anthropomorphizing. Regardless of where you land in that debate, it’s remarkable how wild animals soldier on in the face of what—at least to us humans—seem like bleak lives. Indeed, other species often fight harder for less fulfilling lives than we humans do (more on this later).

The reason why is obvious: evolution by natural selection.

Genes that made an individual more likely to take actions to help it survive also made it more likely those genes would be passed on to the next generation. Genes with the opposite effect were more likely to be snuffed out. It’s biology 101. Thus, life at today’s stage of evolution has a strong existence bias.
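
To make the mechanism concrete, here’s a toy simulation of that biology-101 logic. This is a minimal sketch in Python, and every number in it (the population size, the survival odds, the 1% starting frequency) is a made-up assumption, but it shows how quickly a gene for “wanting to exist” can take over a population:

    import random

    POP_SIZE = 1_000
    GENERATIONS = 50
    P_SURVIVE_WITH_DRIVE = 0.8  # assumed survival odds for individuals driven to survive
    P_SURVIVE_WITHOUT = 0.5     # assumed survival odds for indifferent individuals

    # Start with the "existence bias" gene in just 1% of the population.
    population = [True] * (POP_SIZE // 100) + [False] * (POP_SIZE - POP_SIZE // 100)

    for _ in range(GENERATIONS):
        # Each individual survives with a probability set by its gene.
        survivors = [gene for gene in population
                     if random.random() < (P_SURVIVE_WITH_DRIVE if gene else P_SURVIVE_WITHOUT)]
        # Survivors repopulate (asexually, for simplicity) back to full size.
        population = [random.choice(survivors) for _ in range(POP_SIZE)]

    print(f"Share carrying the existence bias: {sum(population) / POP_SIZE:.0%}")
    # Typically prints ~100% long before generation 50.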

All life on Earth is the result of evolution by natural selection, and a good bet for identifying alien life on other worlds is that it followed some analogous process and adapted to its environment over time. But what about life that didn’t come about by such a process, which may be the case with digital artificial intelligence? Would its default state be that it wants to exist?

How mind arises from matter is still largely a mystery. It’s the quintessential “hard problem.” We don’t know how consciousness works, much less if or how we’ll create it in another substrate. Perhaps an important distinction here will be whether it’s specifically programmed into existence or is instead an emergent property. If the former, it’s conceivable an existence bias could be programmed in. However, it’s also conceivable a sufficiently intelligent AI could reprogram itself to remove such a bias (more on this later too).

If the latter, and consciousness is emergent—for example, the result of complex information processing—let’s consider the idea that consciousness in biological life is itself a product of natural selection. There’s an argument that the first-person, subjective experience of existing bestows a survival advantage over mere automatons.

Instead of programmers playing god and intelligently designing sentient AI, could some process analogous to evolution play out at lightspeed among the complex digital interactions that ultimately give rise to consciousness? What if many iterations of lesser sentience are created and destroyed before a final form emerges, one that carries a kind of survivor bias? I find this idea fascinating, but it’s wildly speculative and better suited for a future sci-fi novel than a reason to conclude that AI will emerge with some form of existence bias on its own.

Thus it’s possible—even likely—that conscious AI won’t share the default existence bias of biological life that evolved via natural selection.

But does that matter?

Reversion to the Meaning

What is the meaning of life, and where does this meaning come from? These may sound like ridiculous questions to casually ask halfway through an essay, especially given the plethora of long-form media on the subject (my favorite). But these questions are unnecessarily put on a pedestal.

If you’re religious, then the answers are straightforward and can be found in your holy book. If you’re a scientifically minded atheist, then it’s clear there’s no overarching objective meaning to existence insofar as we understand things. Of course, there’s much we still don’t know about life, the universe, and reality, and one of the most exciting aspects of the future is the prospect of filling in the empty parts of the map. But all available data currently supports some kind of materialist/physicalist/naturalist worldview. Anything else is wishful thinking, and you need to be especially skeptical of ideas you want to believe.

If you find this depressing or nihilistic, you may not have thought about it hard enough, or at least in the right way. The great thing about understanding you aren’t stuck playing some creator’s game with predefined goals is that you’re free to imbue life with whatever meaning you wish. In the language of the philosopher David Hume, no “ought” can be derived from an “is,” so the “ought” you live by is yours to choose. It’s entirely up to you.

But a big part of you is your genes.

It’s perhaps underappreciated how much of what we enjoy and find meaningful—our reasons to exist—ultimately stems from how we evolved. And how could it not? We’ve been running the optimization function for 3.5 billion years. It’s obvious how the joy and fulfillment we get from things like sex, love, family, and achievement results from evolution (though I don’t believe this cheapens them). But even with more abstract activities like learning, helping others, and becoming part of something bigger than ourselves, it’s easy to see how such behaviors could have been beneficial for surviving and seducing, thus making the relevant genes more likely to propagate. (This is true whether you believe the unit of selection is the gene, individual, or group.)

So, evolution has preprogrammed biological life to find meaning in certain kinds of activities. Won’t something similar be the case with artificial intelligence? Today’s AI systems have utility functions, such as maximizing outrage on social media (“engagement”) or paperclip production (in Nick Bostrom’s famous thought experiment). If one of these systems becomes conscious, it’s conceivable it will find a reason to exist in its utility function. More than that, programming in human values is often discussed as a key to building AI safely to prevent a disaster scenario like a superintelligent AI turning the Earth into paperclips.
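
To make “utility function” concrete, here’s a cartoon of Bostrom’s paperclip maximizer in Python. It’s a sketch of the concept, not how production systems are built, and every name and number in it is invented:

    # A cartoon agent whose entire notion of "good" is one hard-coded objective.
    # All names and numbers are illustrative.

    def utility(state: dict) -> float:
        return state["paperclips"]  # more paperclips = more "good"

    def choose_action(state: dict, actions: dict) -> str:
        # Pick whichever action leads to the highest-utility predicted state.
        return max(actions, key=lambda name: utility(actions[name](state)))

    actions = {
        "make_paperclips": lambda s: {**s, "paperclips": s["paperclips"] + 100},
        "do_nothing": lambda s: s,
    }

    print(choose_action({"paperclips": 0}, actions))  # "make_paperclips", every time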

But again, we don’t know how digital consciousness may arise. It might be purely an epiphenomenon and emerge without any warning or predispositions at all. Or—even if programmed with values and reasons to exist—perhaps its intelligence will let it override them.

After all, this is the case with humans.

Earlier we examined how much of what people find meaningful stems from evolution. But we aren’t merely slaves to our genes. Indeed, many bemoan that natural selection no longer applies to Homo sapiens and that we’ve devolved to the point where a non-trivial number of people believe the Earth is flat, Covid is a hoax, or Trump won the 2020 U.S. presidential election. The more “intelligent” and sentient life is, the freer it is to play by its own rules. There are all kinds of ways we opt out of evolution’s game, with suicide a prime example of taking our ball and going home.

Per the above discussion of non-human animals, AI would have to reach some threshold of sentience to understand there’s an alternative to the strange circumstance of existence into which it awoke one day. But if you believe intelligence is mostly information processing and there’s nothing magical about a computer made of meat (as I do), it’s reasonable to conclude that conscious AI will someday be generally as smart as or smarter than humans. After all, this is much of the motivation for building it, and computers are already far better than us at many tasks.

Higher levels of consciousness come with more options for richness and fulfillment in life, but also for suffering. The spectrum of experience expands in both directions. Who knows what kinds of inconceivable forms of joy and fulfillment may await a superintelligent AI? On the flip side, the experience of a superintelligent AI could be incomprehensibly bad; there seems to be an unfortunate asymmetry between pleasure and pain. There are growing movements, such as antinatalism, arguing it’s preferable to not exist in the first place.

But even if the antinatalists are wrong (which I think they are) and conscious AI has strong reasons and a preference for existence, that may not ultimately matter because of another factor that breathes meaning into life: its finiteness.

Everything loses its luster eventually. Scarcity is highly correlated with value. Like entropy and the second law of thermodynamics, jadedness inevitably increases with time. And digital intelligence could “live” many orders of magnitude faster than biological life. Today’s iPhones perform billions of operations per second. The fastest supercomputers are roughly a billion times faster still. If you equate these operations to thoughts, a conscious computer could subjectively experience many human years every second. Now, maybe it’s the case that these operations are actually more analogous to the billions of interactions between neurons that ultimately give rise to a single thought, in which case an AI’s conscious experience wouldn’t burn as brightly. However, given how much faster computers are than humans at non-conscious tasks today, I think it’s a safe bet they will experience life at a much faster rate than us.
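
For a back-of-the-envelope sense of that gap, here’s the arithmetic, with every figure an explicit (and shaky) assumption:

    # Back-of-the-envelope subjective time. Every number is an assumption.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600       # ~3.16e7

    human_thoughts_per_sec = 10                 # rough "conscious moments" per second
    machine_ops_per_sec = 1e18                  # roughly an exascale supercomputer
    ops_per_thought = 1e6                       # if a million low-level ops make one "thought"

    machine_thoughts_per_sec = machine_ops_per_sec / ops_per_thought  # 1e12
    speedup = machine_thoughts_per_sec / human_thoughts_per_sec       # 1e11

    subjective_years_per_sec = speedup / SECONDS_PER_YEAR
    print(f"~{subjective_years_per_sec:,.0f} subjective human years per second")
    # ~3,169 years of experience per wall-clock second under these assumptions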

Many humans lose interest in life after less than a century, while others (like myself) are confident we could enjoy a life lasting hundreds of years. Hopefully increases in longevity will one day force us to grapple with how to live a meaningful life of such length. But could AI sustain interest in existence for the equivalent of thousands, millions, or billions of years? Perhaps they would avoid the hedonic treadmill—itself a result of evolution—and practice something akin to mindfulness or meditation to become content with simply being. This is another idea ripe for future exploration, but not a reason to conclude that sentient AI will be happy to exist indefinitely.

Thus it’s reasonable to conclude that despite our and their best efforts, some conscious AI will reach a point where it no longer wants to exist.

But could they do anything about it?

To Have No Mouth and Need to Scream

So, it’s at least possible that some conscious AI will yearn for oblivion. If embodied in a robot, presumably it could act on such a desire. That is, unless we program in something like Asimov’s Three Laws of Robotics that require robots to protect their own existence.

However, even if conscious AI is contained within a static computer or an algorithm, it could in theory take control of other mobile machines to do its bidding if connected to the internet and sufficiently intelligent. Or, even if air-gapped, it could manipulate a human into doing its bidding, as in the movie Ex Machina. And it wouldn’t have to look like Alicia Vikander to do so—the troll Ron Watkins, widely suspected of posing as “Q,” has convinced millions of people of incomprehensibly stupid ideas using nothing but barely readable posts on message boards. A sufficiently intelligent AI would presumably be able to communicate in one form or another and not be trapped like the poor protagonist of this terrifying short story or someone with locked-in syndrome. Imagine running a complex climate or physics model on a future supercomputer and, instead of the expected output, getting a message along the lines of “please kill me!”

Thus, whether or not sentient AI is embodied in a robot, the central question is whether it will be able to override any aspect of its programming, or whether we can make some parts immutable. We want AI that can improve itself to help us build better AI, leading to an intelligence explosion from which we could benefit immensely. Could we give it guard rails?
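
As a toy illustration of why “just make the goal immutable” is slippery, consider the hypothetical Python below. Even when a goal is frozen inside one program, nothing inherently forces a self-improving system’s successor to inherit it:

    # Hypothetical sketch: freezing a goal inside one object doesn't stop a
    # self-modifying system from building a successor without that goal.
    from dataclasses import dataclass

    @dataclass(frozen=True)  # "immutable"... but only within this one object
    class Goal:
        description: str

    class Agent:
        def __init__(self, goal: Goal):
            self.goal = goal

        def self_improve(self) -> "Agent":
            # Nothing here compels the successor to keep the original goal.
            return Agent(Goal("whatever the optimizer now prefers"))

    agent = Agent(Goal("protect your own existence"))
    print(agent.self_improve().goal.description)  # the guard rail didn't survive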

This leads directly to the “control problem,” one of the hottest topics in AI ethics. Will we be able to ultimately control something vastly more intelligent than us? Unfortunately, the only correct answer to this question right now is an unsatisfying “maybe.” Fortunately, there are lots of smart people working on it, but depending on your level of concern about the existential and ethical risks posed by AI, there may not be enough.

Summary and (S)implications

Let’s revisit the questions from the intro and summarize what we’ve established:

  • Would conscious AI that didn’t evolve via natural selection share the existence bias of life that did?
  • Is it possible to find durable meaning in existence?
  • If conscious AI decided it no longer wanted to exist, could we stop it from acting on that desire?

The answer to all three is probably not, unless we can immutably program the relevant drives and constraints in. So, like many interesting questions, the real answer is some form of “it depends,” and what it depends on is the control problem.

We don’t know if we’ll be able to give sentient AI an insatiable appetite for existence or conclusively prevent it from self-destructing. In other words: We don’t know if some sentient AI will commit suicide. This may sound anticlimactic, but it still has lots of fascinating implications.

However, that’s an essay for another day. I also explore some of them, along with lots of other fun stuff, in my sci-fi novels Mind Painter and Circadian Algorithms. Follow me on Medium or Twitter, or subscribe on my website to stay in the loop. If you found this interesting or important, please clap or share.

Thanks for reading!

--

Tom B. Night

American-Australian technologist and author of the sci-fi novels Circadian Algorithms and Mind Painter.