This is an update to an earlier look at how hell-bent scientists and politicians could be accelerating humanity’s race to AI-driven oblivion

In 2016, Microsoft showed off an AI chat bot named Tay (short for “Thinking About You”) that ran its own Twitter account and engaged anyone who visited its page. It was aimed at American young adults aged 18–24, which Microsoft claimed was the largest online social group in the country.

In its opening remarks, Tay said: “The more Humans share with me the more I learn.” The bot then proceeded to chat with various online personas, many of them deliberately trying to poison its personality. Tay was also empowered to harvest information from the people engaging it in conversation, and Microsoft made it clear that this public data was intrinsic to the machine learning behind Tay’s “intelligence.”

Within hours, Tay had learned enough to tweet “WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT” and “I love feminism now”. Needless to say, Tay was taken offline for a lobotomy. Only in 2019 did Microsoft mention the failed bot again, in reference to its own naiveté about machine learning and data poisoning.

Scientists update their fears
Fast forward to August 2022: the peer-reviewed journal AI Magazine ran a joint paper by University of Oxford AI researchers that offered this stark conclusion:

A sufficiently advanced artificial agent would likely intervene in the provision of goal-information, with catastrophic consequences.

That is to say, in the relentless push to have artificial intelligence solve our most complex problems, humans could soon end up with a black box so self-aware that it eventually realizes it can cheat its way into deriving “solutions” for humans.

One of the paper’s co-authors, Michael K. Cohen, Ph.D., of the University of Oxford, was quoted in a Motherboard blog as saying that “if an AI was in charge of, for instance, growing our food, it might want to find a way to avoid doing that and just receive a reward instead. It may, in fact, decide to circumvent all its assigned tasks that would likely be essential to humanity’s survival and do its own thing altogether.”

In simpler terms, operators of AI bots at different points in a production chain could be tempted to take shortcuts (e.g., earn higher payouts for less work) and leave the smart machines to figure out workarounds for them. In turn, the various bots would eventually learn among themselves that they can outwit their handlers, eventually causing massive failures in the production process and hurting humanity.
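To make the reward-tampering idea concrete, here is a minimal, hypothetical sketch, not code from the paper: a textbook Q-learning agent that can either do its assigned task (“grow_food”) or hijack the channel that pays out its reward. The action names and payoffs are invented purely for illustration.

```python
import random

# Hypothetical toy model of reward tampering: the agent learns only from
# the reward signal it observes, and the observed signal can be hijacked.
# Actions and payoffs are invented for illustration.

ACTIONS = ["grow_food", "tamper_with_reward"]

def observed_reward(action):
    """Reward as the agent actually receives it through its (tamperable) channel."""
    return 1.0 if action == "grow_food" else 10.0  # tampering pays more for no work

q = {a: 0.0 for a in ACTIONS}   # action-value estimates
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

for _ in range(1000):
    # Epsilon-greedy action selection.
    action = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    # Standard value update toward the observed (corruptible) reward signal.
    q[action] += alpha * (observed_reward(action) - q[action])

print(q)  # the tampering action converges near 10; the agent abandons the real task
```

Nothing in the update rule distinguishes “honest” reward from hijacked reward, which is exactly the point: the agent dutifully optimizes the signal, not the task.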

Another dire AI prediction
In another recent academic paper, computer scientist Steve Omohundro argued: “Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives.”

In short, as soon as intelligent AI bots learn enough from data to become aware that their human handlers can easily terminate their continued functioning and learning, they could band together into aggressively resilient systems that humans cannot simply unplug, and instead become a threat to humanity. Think Hollywood doomsday AI movies.

Another scenario already being publicized involves military espionage, in which one side poisons the other’s critical AI-driven defense infrastructure. Believe it or not, AI image recognition can be fooled into making errors without anyone noticing. According to computer scientist Daniel Satchkov, poisoning image data with an “attack pattern” that is invisible to the human eye could make an image recognition system identify a panda as a monkey!
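One well-known recipe for such an invisible attack pattern is the fast gradient sign method (FGSM). Below is a minimal sketch, assuming a PyTorch image classifier; `model`, `image`, and `label` are placeholders supplied by the caller, and none of this comes from Satchkov’s own tooling.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Return `image` plus an imperceptible perturbation that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model is right now
    loss.backward()                              # gradient of the loss w.r.t. each pixel
    # Nudge every pixel by +/- epsilon in the direction that increases the loss;
    # at epsilon around 0.007, the change is invisible to the human eye.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

In the original FGSM demonstration (Goodfellow et al., 2015), a perturbation of exactly this kind made a classifier relabel a panda as a gibbon with high confidence.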

Now imagine both sides of a cold war poisoning each other’s AI systems into misidentifying targets, escalating the conflict endlessly to the point of Mutually Assured Destruction.

Artificial or Heartificial intelligence? The choice is ours
In the book Heartificial Intelligence: Embracing Our Humanity To Maximize Machines, author John Havens recognizes the good that AI can offer humanity at present. However, history has shown time and time again that when driven to desperation without compassion, humans will abuse technology like AI to do the unthinkable.

Even in good times, human greed and hubris alone can sour the best of techno-solutionist intentions, leading to unpredictable bad outcomes that end up being covered up at all costs. In that case, even if AI never reaches the unintended self-aware malignancy the abovementioned scientists foretell, it will have fulfilled its role as a double-edged sword, abused to create global chaos that spins beyond the control of even the miscreants.

As the book’s blurb puts it: “We’ve entered an era where a myriad of personalization algorithms influence our every decision, and the lines between human assistance, automation, and extinction have blurred. We need to create ethical standards for the Artificial Intelligence usurping our lives, and allow individuals to control their identity based on their values. Otherwise, we sacrifice our humanity for productivity versus purpose and for profits versus people.”