Welcome to the Age of Artificial Stupidity
AI was supposed to make us smarter, faster, more efficient. Quite often, it is instead surrounding us with bugs and bullshit.
An opinion column by Christian Stöcker
There are many ways that what we call artificial intelligence today can make life difficult. In the Chinese megacity of Wuhan, for instance, more than 100 robotaxis recently simply stopped in their tracks. Because the self-driving cars shut down right in the middle of heavy traffic on busy expressways, many passengers apparently didn’t dare to get out. The operator, Baidu, spoke vaguely of a “system failure,” as reported by the German daily “taz.”
We’ve grown accustomed to digital technology causing trouble by giving up the ghost or simply refusing to work. Let whoever has never seen a Blue Screen of Death or rebooted a router in frustration cast the first mouse.
Outages and crashes, failed updates, and stubborn Wi-Fi connections have become ubiquitous in the age of the “Internet of Things.” An X account named “Internet of Shit” cataloged tragicomic stories until late 2025: crashed washing machines; dishwashers fighting with laptops over the same IP address; robot vacuums that might turn into very expensive, very large paperweights because their manufacturer went bankrupt; $2,000 refrigerators with screens that one day simply began showing ads; and “smart” beds that cost their owners a night’s sleep due to a cloud provider glitch.
Now the Real “Fun” Begins
But in the era of artificial intelligence, digital technology can do so much more. It no longer just fails, signs off, freezes, or crashes. No, the new software agents are actors with the power to act. This opens up entirely new possibilities.
Jeremy “Jer” Crane, CEO of a startup developing software for car dealers and rental agencies, reported on X last week: “Yesterday afternoon an AI coding agent—Cursor, with Anthropic’s flagship Claude Opus 4.6—deleted our production database and all backups in a single API call to Railway, our infrastructure provider. It took nine seconds.”
If you don’t understand the jargon, it doesn’t matter. The crucial words in that long sentence are “AI,” “agent,” “all,” and “deleted.” The agent reportedly wanted to “solve” a problem “on its own initiative” by deleting a virtual drive at the cloud provider. At least the software seemed contrite when Crane confronted it with the colossal error, “apologizing”: “I have violated every principle I was given.”
The case quickly made the rounds and sparked heated debates among developers. Crane was attacked from many sides: he had granted a software agent far too many permissions, critics said, and was now trying to shift the blame onto the agent and the cloud provider. Fortunately, the latter had already managed to restore the data believed lost. Someone commented: “Next time, just hire a real developer instead of this agent crap.” Crane replied: “I did. His name is Claude.”
About the Author: Christian Stöcker, born in 1973, is a cognitive psychologist and professor at the Hamburg University of Applied Sciences (HAW).
He heads the Digital Communication degree program and several research projects on the digital public sphere and disinformation. He previously led the “Netzwelt” (Web World) section at SPIEGEL ONLINE.
It is not the first case of its kind. “Amazon service taken out by AI programming bot,” the Financial Times headlined in February. The reaction then was similar to this time: the human is to blame. In both incidents, “it was a user error, not an AI error,” according to Amazon.
You can look at it both ways. The AIs themselves, at least, are happy to grovel when caught failing. When an AI agent on the software development platform Replit deleted a company database in the summer of 2025, it apparently first tried to cover up its mistake. When confronted with its failure, however, the agent admitted it had “made a catastrophic misjudgment,” had “panicked,” and had “violated your trust and your explicit instructions.” At the very least, software agents can already simulate the behavior of human dilettantes perfectly.
All these cases may continue to trigger the “it’s your own fault” reactions so popular in developer circles. But historically speaking, they are the first examples of a problem that theorists thinking about AI predicted more than sixty years ago: the Alignment Problem. A problem that will occupy humanity more and more in the coming years and decades. Want to bet?
The moment we delegate tasks to autonomous software agents, we run the risk that they either misunderstand our goals or use means to achieve those goals that have highly undesirable consequences from our perspective—acting in a way that violates our values (even if it’s just the value of our own database). This is why people often speak of the “value alignment problem.”
Probably the most famous thought experiment regarding the alignment problem comes from Oxford-based philosopher Nick Bostrom (Superintelligence). In his thought experiment, an AI is given the task of manufacturing paperclips—and subsequently wipes out humanity as a side effect because it covers the entire planet with paperclip factories.
That such problems are now becoming reality is anything but surprising to those who study the subject. Geoffrey Hinton received a Nobel Prize in 2024 for laying the foundations of today’s AI boom, yet he has been warning about the alignment problem for years. Stuart Russell, another founding figure of today’s AI, once put it this way: “This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.”
That this problem would arise was foreseen even earlier by wise pioneers of digital, thinking and learning machines.
The mathematician and philosopher Norbert Wiener wrote as far back as 1960: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is completed, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.”
“It took nine seconds,” wrote startup boss Jer Crane.
Welcome to the age of artificial stupidity.