On July 16, 1945, the world crossed a threshold from which it could never return. The first atomic bomb was detonated, and nothing was ever the same. Today, a similar race is unfolding around Artificial General Intelligence (AGI). While the US accelerates, experts warn: this is no Manhattan Project. An uncontrolled superintelligence could be the single greatest risk to humanity’s future.
How we approach AGI will shape the next century. The question is not just how fast we can go, but whether we should.
In the early-morning darkness of the New Mexico desert, history split open. The Trinity test marked the dawn of the atomic age. J. Robert Oppenheimer, scientific director of the Manhattan Project, would later recall a line from the Bhagavad Gita:
“Now I am become Death, the destroyer of worlds.”
Today, the US stands at another such inflection point. This time the target is not a bomb, but the most powerful form of AI ever imagined: Artificial General Intelligence. Unlike narrow AI systems, AGI refers to machines capable of performing any intellectual task a human can. Think of a system that can write poetry, diagnose illness, and make complex political decisions, all at once.
So, is the AGI race another Manhattan Project moment? Or is this speed a dangerous miscalculation?
The Illusion of a Clear Target
The Manhattan Project had one goal: build a bomb. The scientists involved understood the physics, had a clear plan, and could measure progress.
AGI is different. There is no fixed target, no shared definition of what “success” looks like. What do we mean by intelligence? High scores on standardized tests? Artistic ability? Empathy? Without clear benchmarks or consensus, AGI becomes a moving target.
And while nuclear science relied on observable physical phenomena, AGI’s foundation is more ambiguous. How will we know when we’ve succeeded, if we don’t even know what we’re measuring?
Why the US Is in a Hurry
In Washington, AGI is increasingly seen through a geopolitical lens. Rising competition with China has heightened the sense of urgency. In 2024, the US-China Economic and Security Review Commission submitted a report to Congress urging massive investment in AGI, likening the effort to a modern-day Manhattan Project.
OpenAI co-founder Greg Brockman has championed rapid scaling, helping drive enormous supercomputing build-outs while publicly pushing for acceleration. Under the Trump administration, this momentum has only intensified. Some now see AGI as a strategic weapon, and the US appears unwilling to fall behind.
The Risk of the Wrong Analogy
Not everyone agrees with this approach. A group of influential voices, including Scale AI CEO Alexandr Wang, former Google CEO Eric Schmidt, and Center for AI Safety Director Dan Hendrycks, published a report titled “Superintelligence Strategy.” Their warning is clear:
“Moves to develop a super weapon will pressure rival states to respond aggressively, increasing global instability. Let’s not forget, the Manhattan Project didn’t lead to lasting peace.”
Their concern is that framing AGI as an arms race, something to win at all costs, may lead to the development of systems too powerful to control. And the world won’t have the luxury of second chances.
From the report:
"Launching a Manhattan Project for AGI assumes rivals will quietly accept long-term imbalance or devastation. But that assumption is flawed. A project aimed at dominance is likely to provoke countermeasures, escalating tension and undermining global stability.”
Schmidt’s name on the report is especially notable. Until recently, he was an outspoken advocate of aggressive US competition with China in advanced AI. In a recent essay, he even described DeepSeek as a turning point in that race.
A New Concept: Mutual Assured AI Malfunction
The report introduces another key idea: Mutual Assured AI Malfunction (MAIM). Echoing Cold War deterrence, it describes a standoff in which any state’s aggressive bid for AI dominance invites preventive sabotage by its rivals, since no one can afford to let a hostile superintelligence project succeed.
The Pentagon has already begun weaving advanced AI into military planning, while China and Russia watch closely and rapidly build their own systems. As this escalates, AGI becomes not a shared scientific endeavor but the frontline of a new cold war.
The Third Way: Responsible AGI Strategy
According to the report, today’s AI politics fall into two extremes. On one side, the doomsayers believe the only solution is for all countries to slow down. On the other, the optimists insist development should speed up, assuming good outcomes will follow.
The authors argue for a third path. Instead of obsessing over “winning,” nations must focus on building systems that are controllable and safe. The US, they say, should lead not by racing ahead, but by discouraging risky development elsewhere.
That means expanding cyber capabilities to neutralize adversarial AGI projects, and tightening access to advanced chips and open-source models. In other words, security first, not just supremacy.
A Civilizational Choice
Personally, I believe unchecked AGI development could end in technological disaster. Unlike with nuclear weapons, once control over AGI is lost, we may never get it back.
A superintelligent system would influence decisions that shape every aspect of life. And we have already seen how even comparatively simple systems, social media recommendation algorithms, for instance, can reshape behavior and society. If those systems can distort our lives, what happens when we hand the steering wheel to something vastly more capable?
History has shown, again and again, that rushing toward power without responsibility carries immense cost. In the age of AI, we must remember the lesson of Oppenheimer.
The road to disaster is often paved with ambition and good intentions.
And this decision may end up in the hands of leaders like Donald Trump and Xi Jinping.
How we handle AGI will define the century ahead. Will we charge ahead blindly, or proceed with care?
China and the US are making their moves.
The rest of us are watching, holding our breath.