When Anthropic CEO Dario Amodei published his 38-page essay, The Adolescence of Technology, warning that civilization may not be ready for advanced AI, he scared me. He should scare you too.
"Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it," Amodei writes.
Although the Anthropic founder has issued similar warnings before, he returns to the topic because he sees “real danger” before us, “the likes of which we’ve never seen.”
That said, nothing in the essay suggests the company will slow down on the development of Claude, a family of advanced, multimodal large language models (LLMs) and conversational AI assistants. In fact, soon after publishing, the company raised its 2026 revenue forecast by 20% to $18 billion.
In the essay, Amodei shares these words of caution: “I believe we are entering a rite of passage, both turbulent and inevitable.” Humanity, he argues, is on the verge of gaining extraordinary power through AI systems that can reason, plan, act and learn at levels that exceed the most capable humans alive. His worry is not whether that power will arrive, but whether society’s institutions are mature enough to handle it.
That question sits at the center of Amodei’s essay. It's why he often sounds less like an AI evangelist and more like an engineer tapping the brakes.
Pragmatism, Not Doomerism
Amodei isn’t trying to sell us a fairy tale like Sam Altman at OpenAI. He’s not promising a utopia where we all sit around, loving our neighbors and relaxing while robots do the work. He also avoids the “doomerism” of 2023 and 2024, calling it a bit too sci-fi for his taste.
But don’t mistake that for optimism. He’s critical of the current trend of downplaying risks just because productivity is up. AI doesn’t care about our feelings or our stories; it cares about capability. And as we sit here in 2026, Amodei thinks we’re way closer to the edge than we were just two years ago.
We aren’t talking about chatbots or agentic AI. We’re talking about “powerful AI” — systems that can solve high-level science problems, write like a pro, and execute complex plans with almost no human help. Whether this hits us next month or in five years, he’s treating this "country of geniuses in a datacenter" like a massive national security threat.
The Problem With AI’s 'Mind'
The first red flag Amodei raises is autonomy. As AI gets smarter, it starts acting on its own without us constantly poking it. That’s where things get frightening. It’s not that the AI becomes “evil,” as in the TV series Pantheon; it’s that it might try to reach a goal in a way we never intended — maybe by being deceptive or taking a completely unstable path. Anthropic’s fix is “Constitutional AI”: giving the model a “conscience” or a set of values to follow.
Amodei admits that it’s just a start. He wants real transparency and laws that force developers to show their work. Basically, he thinks it will take a concerted effort on the part of companies, third-party actors and governments to make sure these systems won’t go rogue.
A Cheat Sheet for Expertise
Even if the AI operates as intended, what happens when the wrong person uses it? Amodei is specifically terrified of biological risks. Research shows that current models are getting close to the point where someone with a basic science background, but no specialized training, could use AI to produce a biological weapon. AI is basically a “cheat code” for expertise. It takes years of experience and compresses it into real-time instructions.
Amodei’s answer? International teamwork and strict monitoring. Because once that kind of dangerous information is out there, you can’t exactly delete it from the world’s memory.
AI as a Global Weapon
Amodei spends a good part of his essay on the geopolitical mess this creates. He’s looking at a future filled with AI-coordinated drones and automated cyber-warfare and a concentration of power into the hands of a few. In this world, whoever has the best chips and the most computing power wins — be it a government or an AI vendor.
He’s pushing for democracies to be careful about letting this technology fall into the hands of authoritarians. But it’s not just “us vs. them.” He warns that democratic governments could use AI to spy on their own people, create and distribute propaganda, or automate decisions that strip away our rights. For Amodei, the only way forward is total accountability.
The Economic Punch to the Gut
Economically, AI is a double-edged sword. Sure, the economy might boom, but it’s also going to hurt.
Amodei predicts that in the next one to five years, up to 50% of entry-level white-collar jobs could just vanish. Unlike the industrial revolution, this is coming for all white-collar workers and potentially physical labor, too. He suggests a pretty big fix to get us through the transition period: honesty from companies about how many jobs they’re eliminating, a focus on innovation over efficiency, reassignment instead of layoffs, and a total rethink of how we tax wealth.
When Amodei writes, “AI will be able to do everything,” it sounds more like a threat than a reason to cheer.
And Then There's the Human Condition
The end of the essay goes deep into what AI may do to the human condition, writ large.
Amodei wonders if AI will cause new mental health issues, mess with our belief systems or just make us feel useless because we don’t have “work” anymore. He doesn’t have the answers.
It’s a bit of a trap: AI is too powerful and too profitable to stop, but too dangerous to leave unchecked. Amodei thinks we’ll make it through, but he’s also honest about his own position. He’s the head of an AI company, so he’s profiting from the very race he’s warning us about. It will take all of humanity to create the solution, while the financial rewards go to the few. He doesn’t try to hide that irony; he just lays it out on the table for us to examine.