The April 7 announcement by Anthropic — that its new Mythos model was far more powerful and thus far more dangerous than any previous AI tool — increasingly appears to be a pivotal moment in human history. On Tuesday, The New York Times reported that it had set off “global alarms” at central banks, in intelligence agencies and among government leaders across the world.
Driving the reaction, the Times said, was independent confirmation that Anthropic was right: Mythos is “uncannily capable of finding and exploiting hidden flaws in the software that runs the world’s banks, power grids and governments.” Reporters Paul Mozur and Adam Satariano wrote that this “illustrated a reality that AI researchers have long warned about mostly in theoretical terms: Whoever leads in building the most powerful AI models will gain outsize geopolitical advantages. Major AI breakthroughs are beginning to function less like product launches and more like weapons tests.”
This gives a fascinating new dimension to the dispute between the Pentagon and Anthropic over the $200 million contract the San Francisco-based company signed in July 2025 to assist with U.S. military needs. In December, it was reported that Anthropic CEO Dario Amodei had balked at the Pentagon’s demand that its AI models be available to the U.S. government for “any lawful purpose.” Amodei cited two concerns: that Anthropic’s technology could be used in fully autonomous lethal weapons whose decisions are made without human involvement, and that AI could be used for mass domestic surveillance on an unprecedented scale.
This led President Trump to question Anthropic’s patriotism and Defense Secretary Pete Hegseth not just to cancel the 2025 contract but to have Anthropic formally designated a “supply chain risk” on March 5. The designation prohibits any company doing business with the U.S. military from also doing business with Anthropic, a seemingly profound threat to Anthropic’s future, given the Pentagon’s status as the world’s deepest-pocketed customer for firms selling advanced technology.
Anthropic sued in response. But what it didn’t do was panic at the prospect that the Pentagon’s sanctions would cripple the company to the great benefit of rival firms led by OpenAI.
This suggests that Amodei believed Anthropic had more leverage over the Pentagon than the Pentagon had over Anthropic: the CEO of a company founded just five years ago shrugging off a threat from the world’s most powerful and best-funded entity.
And this in turn suggests that the better fictional lens for AI’s rise is not the “Terminator” movies, in which Skynet becomes self-aware and plots to eradicate humanity, but James Bond thrillers like “Moonraker,” “Tomorrow Never Dies” and “No Time to Die,” in which wealthy geniuses believe their technology lets them stand up to, or even overcome, any government.
This is especially so because this week, after Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent to discuss Mythos, the president blinked in the administration’s showdown with Anthropic.
“They’re very smart, and I think they can be of great use,” he said Tuesday on CNBC. “I think we’ll get along with them just fine.”
It is of course impossible to know where this will all lead. But after seeing a president back down from a tech CEO in such a freighted dispute, the old rules start to feel very far away.