In November 2024, the bipartisan US-China Economic and Security Review Commission recommended that Congress establish and fund an effort equivalent to the Manhattan Project, the program that enabled the US to develop the atomic bomb ahead of other countries, to help maintain the US lead in AI. Specifically, the commission called for acquiring the capability for artificial general intelligence (AGI): systems that are as good as, or better than, humans.
The move was driven largely by concerns that China, the United States’ closest competitor in AI development, could be quicker to develop military applications for the technology. Since then, the proposal has drawn pushback, with some critics warning that a “Manhattan Project for AI” could, like its namesake, end in war.
‘Mutually Assured AI Malfunction’
The most serious pushback against the “Manhattan Project for AI” came in a paper titled "Superintelligence Strategy," released in two versions: a 12-page standard version that provides an overview of the authors' arguments and a 40-page expert version.
Both versions of this paper were written by Dan Hendrycks, director of the Center for AI Safety; Eric Schmidt, former CEO and chairman of Google; and Alexandr Wang, founder and CEO of Scale AI. The three made several appearances on podcasts and in other media to explain the paper’s ideas.
Countries should follow a three-pronged strategy, Hendrycks explained on Robert Wright’s NonZero podcast: deterrence, nonproliferation and competitiveness.
Deterrence played on the Cold War doctrine of “mutually assured destruction,” which the paper recast as “Mutually Assured AI Malfunction.” In other words, knowing that rivals might attack any country that appeared to be leapfrogging the others in AGI development would keep each country from getting too far ahead of its competition.
Nonproliferation refers to keeping advanced AI chips not just from major players like China but also from rogue nations, through export controls and enforcement measures such as putting locator functions on the chips.
Competitiveness applies to factors such as strengthening the supply chain for military products (like drones), expanding chip manufacturing beyond Taiwan and, as the US did before World War II, making the country more attractive to foreign-born AI scientists.
That last item, Hendrycks admitted in several podcasts, is a double-edged sword because of the potential for corporate espionage. On the No Priors podcast, he claimed that as many as 30% of the employees at leading AI companies were Chinese nationals. On the other hand, if those companies divested themselves of their Chinese employees, those workers would likely return to China. “They’re important for the US success,” he said.
Superintelligence Strategy: Progress or Paper Tiger?
Some critics of "Superintelligence Strategy," however, said its proposals aren’t much of an improvement on the Manhattan Project idea.
“The connection between the Manhattan Project and AI came up most frequently in the months immediately following the initial release of ChatGPT 3.5, when people realized that AI had leaped far ahead of societal expectations,” said Kevin Frazier, an AI Innovation and Law Fellow at the University of Texas at Austin School of Law, who interviewed Hendrycks for the Lawfare Daily podcast.
“The understandable response was to look to another instance of society being caught off guard by a technological advance. Many people landed on the nuclear era because, like AI, nuclear energy is a dual-use technology that serves both commercial and military ends. Since then, the urge to mirror the steps taken in the atomic era, such as launching the Manhattan Project, has cooled.”
Several critics were particularly concerned that export restrictions on chips to China — which both the initial “Manhattan Project for AI” proposal and "Superintelligence Strategy" support — were, in fact, more likely to lead to war.
“The world’s most advanced AI chips are made in the TSMC factories in Taiwan, and the US chip restrictions mean that China can no longer get those chips (except through a kind of black market), whereas the US and its allies get lots of them,” said podcaster Robert Wright. “So what used to be a deterrent to Chinese invasion — the likelihood that war would disable factories whose most precious output China shared — is much less of a deterrent.” Hendrycks himself conceded that point in the No Priors podcast.
Charting a New Course for AI
Instead, critics preferred current AI development pathways to a monolithic public-private AI project. “We’ve gone pretty far with decentralized development with startups and venture capitalists,” said William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft, a Washington, D.C.-based foreign policy think tank. “The Manhattan Project was locked down from the top down. It was necessary in that moment, but it’s not the best way to develop new technology.”
“My hunch is that people have realized that bringing the best and brightest AI experts together in a private setting may have unintended consequences,” Frazier said. “Just as the Manhattan Project's legacy has become more mixed as time goes on, I think people have some doubts that a similar AI project would be a net positive — there's a chance that clandestine collaboration could go off the rails or lack the sort of transparency that's necessary to earn public trust.”
Moreover, in an era of satellite photography, it’d be tough to hide such a project. The expert version of "Superintelligence Strategy" suggested building data centers underground but noted that they would be more expensive, take longer to build and be more complicated to design and maintain. The paper did suggest, however, that like Cold War missile silos and command facilities, data centers should be located far from cities to avoid collateral damage.
What the US and China really should be doing is talking with each other, several critics said.
“The US and China urgently need to open a serious dialogue about AI and agree on some rules of the road and together try to steer the world toward some degree of cooperative guidance of a technology that has unprecedented potential benefits and unprecedented potential dangers,” Wright said.
In the current political climate, that might be easier said than done. “Mutual inspection is one approach to fostering increased transparency around AI capacity,” Frazier said. “Current geopolitical tensions suggest that China and the US are unlikely to pursue this approach. A more likely scenario would involve a third party recognized by the US and China conducting such evaluations. I'm not sure which party would earn their trust, but I suspect that an independent entity has a better chance of facilitating that heightened level of openness and disclosure.”
Is AGI the Answer — or a Distraction?
There was also the question of whether AGI was the right direction to be heading in the first place.
“We're not in need of some theory-driven project, but rather one that is hyper-specific to how AI will disrupt specific industries and tasks and when it's likely to do so,” Frazier said. “That practical effort would help society adjust to coming changes, to spur innovation and to reform outdated institutions and laws.
"The public rightfully remains unsure of whether AI is going to positively or negatively impact their lives and the lives of their children. I'm convinced that AI has the capacity to unleash a more prosperous and innovative society, but how to make that future more likely is contingent on a lot of smart people asking very tangible and practical questions, rather than debating what constitutes AGI.”
Hartung added: “I have questions about the whole idea of making this our main priority. I can think of a lot of other national missions that are more important, like climate change, preventing pandemics and creating opportunities for young people to curb the epidemics of drugs and suicides. I’d put those before throwing money at militarizing AI.”