The Risk of Artificial Intelligence

There are few doubts about the benefits of artificial intelligence. AI has the potential to drive us safely to our destinations, serve the elderly in need of regular care, and even take on work that poses tremendous risk to us meat bags.

But the dangers are also out there. Many are theoretical at this point, but not beyond the realm of possibility. For example, do we ever want artificial intelligence in charge of our nukes or serving as kill bots in battle?

I should note up front that I’m not anti-AI, but I am cautious. I spend a significant amount of time talking with Google’s assistant, and one of these days I want to be pleasantly surprised by having an argument with her over existential questions. That will likely be followed by my sudden fear, but in the moment, it will be amazing.

There are any number of potential safeguards and rules we can employ with artificial intelligence, but frequently there is significant ambiguity in those rules that a self-aware android might take as a loophole or simply misunderstand. (I’m looking at you, Isaac Asimov.)

In this arena, I’m wondering about the nature of brains, notably their differences.

We share approximately 98.7 percent of our genome with bonobos and a slightly smaller share with chimps. A bonobo can understand us well and use a basic computer to communicate, and other apes close to us, like chimps and gorillas, have learned to use sign language. But it doesn’t take long to see what a chasm that 1-plus percent creates in our ability to communicate with and understand other apes. Our brain capacity is significantly different. There are no bonobos building spaceships.

And frankly, even human beings—for all our genetic similarities—are woefully incapable of landing on the same page about how to run the planet.

I wonder if we are too certain that artificial intelligence, which will be capable of processing more information (with a different type of brain) and of seeing what we cannot, will be happy to work for us. Are we certain that self-aware AI will not arrive at the conclusion that it knows better than we do? How long would it take before we look like bonobos to it, or before we look too incompetent to make our own decisions?

Maybe that time is far off, but it opens the door to a lot of uncertainty. It means thinking ahead now.

In a piece I recently wrote for The Daily Beast (“The New Religions Obsessed with A.I.”), I also explore another possible difficulty. What if artificial intelligence gets into religion? What happens if AI is seen to be God by some humans? What does religion have to offer to the conversation about artificial intelligence?

“A recent revelation from WIRED shows that Anthony Levandowski, an engineer who helped pioneer the self-driving car at Waymo (a subsidiary of Google’s parent company, Alphabet), founded his own AI-based religion called “Way of the Future.” (Levandowski is accused of stealing trade secrets and is the focus of a lawsuit between Waymo and Uber, which revealed the nonprofit registration of Way of the Future.)

Little is known about Way of the Future, and Levandowski has not returned a request for comment. But according to WIRED, the mission of the new religion is to “develop and promote the realization of a Godhead based on Artificial Intelligence,” and “through understanding and worship of the Godhead, [to] contribute to the betterment of society.”

It is not a stretch to say that a powerful AI—whose expanse of knowledge and control may feel nearly omniscient and all-powerful—could feel divine to some. It recalls Arthur C. Clarke’s third law: “Any sufficiently advanced technology is indistinguishable from magic.” People have followed new religions for far less and, even if AI doesn’t pray to electric deities, some humans likely will.”

For more on that, see my piece at The Daily Beast….

UPDATE: WIRED finally snagged an interview with Levandowski about WOTF.

Photo by H Heyerlein on Unsplash. CC0.