In the span of 48 hours, the White House floated the idea of a Food and Drug Administration (FDA)-style pre-deployment vetting regime for frontier AI models, then walked it back.
On Tuesday, POLITICO reported that the administration was circulating a 16-page draft executive order with provisions on cybersecurity, open-weight models, federal contracting, and a pre-release review process for the most capable models. On Wednesday, National Economic Council Director Kevin Hassett previewed the thinking:
We’re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AIs that also could potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe. Just like an FDA drug.
By that evening, Chief of Staff Susie Wiles walked that back, saying the government is “not in the business of picking winners and losers” and would empower “America’s great innovators, not bureaucracy.”
The whiplash matters less than what produced it: an administration caught flat-footed by a capability jump, now scrambling to assemble a response.
The Mythos Moment
The catalyst is Anthropic’s new Mythos model, which early testing suggests can find and exploit software vulnerabilities. Anthropic has not released it publicly; it is being shared with a small set of tech, financial, and security organizations so defenders can patch holes. To give a sense of the capability: Mozilla reported that the Firefox team fixed more security bugs in April using Mythos than in the previous 15 months combined.

The “Mythos moment” is not a one-model phenomenon. OpenAI also released a limited preview of GPT-5.5-Cyber, which finds and patches similar vulnerabilities.
A Self-Inflicted Blind Spot
Part of the reason the administration was caught off guard is its own doing. In March, Defense Secretary Hegseth designated Anthropic as a supply chain risk to national security after the company refused to allow the Department of War to use Claude for domestic surveillance and for fully autonomous weapons.
Now federal agencies are reportedly clamoring for access to Mythos, and the administration is quietly working to establish a review board to revisit the designation. That is the right move, but also an admission of the underlying problem: it is genuinely difficult for a frontier lab to give the government a clear-eyed picture of what is coming when it has been formally labeled a national-security risk on par with foreign adversaries.
The Need for Institutional Capacity
There’s an irony in all this. Many of the ideas now drawing serious attention, including pre-deployment evaluation, capability assessments, and structured information sharing, had real merit and deserved refinement rather than rejection. They were already taking shape inside what is now the Center for AI Standards and Innovation (CAISI) under the Biden executive order that Trump revoked on Day One. This week’s CAISI agreements with Google DeepMind, xAI, and Microsoft, on top of existing ones with OpenAI and Anthropic, show the administration racing to rebuild that capacity on a tighter timeline.
The lesson isn’t that the Biden EO should have been preserved wholesale. It’s that AI governance built on executive orders gets revoked when power changes hands, while the underlying technical work survives, because national-security needs don’t care which party is in office.
The FDA frame was always a stretch on the merits: the agency regulates stable, well-understood products in a mature ecosystem, none of which describes frontier AI. AI systems are dynamic, their risks are uncertain and difficult to measure, and their behavior shifts between testing and deployment. But the deeper problem isn’t picking the right regulatory analogy. It’s that the government has no reliable way to understand frontier capabilities as they emerge, which means every policy response is reactive by default.
A Better Frame
The shape of a better approach is closer to a light-touch coordination framework. It starts with deeper coordination between government and frontier labs, because this is fundamentally an information problem: the government needs early, structured, classified-where-appropriate visibility into what labs are seeing in their evaluations. It means a defensive-first cybersecurity track that gets Mythos-class capabilities into the hands of the Cybersecurity and Infrastructure Security Agency, the National Security Agency, and critical-infrastructure operators faster, not slower; tapping the intelligence community to pre-assess models is directionally right. It requires a meaningfully strengthened CAISI with the staffing, technical depth, and authorities to serve as a real evaluation partner rather than a convening shop. And it demands more pressure on frontier labs to extend transparency requirements beyond CBRN and cyber into capability domains that get far less attention than they deserve, including the growing concern around mental-health risks.
This week exposed a real underlying gap: the government still lacks the systems, relationships, and technical capacity to see frontier capabilities coming. Until that changes, every Mythos moment will be a surprise.