Trump’s AI order rewrites who gets to govern technology

For years, America’s approach to regulating new technology has followed a familiar pattern. Congress drags its feet. States step in. Courts sort out the mess. Eventually, after enough public pressure, Washington puts a national framework in place.

Data privacy followed this pattern. Congress debated a national privacy law for years without acting. California passed the California Consumer Privacy Act (CCPA); other states followed; companies objected in court; and only after the market had adapted did Washington seriously consider federal action.

President Donald Trump’s new executive order on artificial intelligence (AI) tries to skip all of that. Rather than ask Congress to pass an AI law, the administration uses executive power to push states aside with lawsuits, funding threats, and federal agency overrides.

Framing AI as a race

The order opens with language about America’s “national and economic security” and the need for “dominance” in AI. It warns that the US is in a “race with adversaries” for AI supremacy. This playbook isn’t new. Washington took a similar approach with TikTok, framing it as a national security threat and pushing for bans and divestment.

While the order doesn’t name a specific country, it doesn’t need to. In the US, this kind of language has long been well understood. When policymakers talk about technological “races,” “supremacy,” and “dominance,” they are almost always talking about China. Over the past decade, China’s heavy state investment in AI, semiconductors, and data infrastructure has increasingly shaped how US policymakers talk about technology. AI is treated as a strategic asset, on par with defense.

In recent years, the US administration has moved again and again to stop advanced AI chips from going to China, arguing that the same technology powering chatbots and data centers could also boost China’s military and surveillance systems. The clearest example is Nvidia.

The US government blocked the company from selling its most powerful AI chips to Chinese customers, and when Nvidia tried to work around the rules by designing toned-down versions that complied on paper, those chips were later pulled into tighter export controls too.

That sense of urgency is politically useful. It lets the administration argue that public consultation is a luxury America can’t afford. When everything becomes a race, shortcuts start to look justified.

That may make sense in a race. But it also explains why this order is less about building durable rules for AI and more about clearing obstacles, especially when those obstacles come from states trying to regulate technology in the absence of federal action.

Fall in line, or pay the price

Rather than proposing legislation, the administration is creating an AI Litigation Task Force at the Department of Justice (DoJ). The order directs the DoJ to set up the task force within 30 days. Its only job is to challenge state AI laws that don’t align with White House policy. The federal government isn’t just reserving the right to intervene if states cross constitutional lines. It’s actively preparing to hunt for state laws to sue.

The order also gives the Commerce Department 90 days to compile a list of “onerous” state AI laws and directs federal agencies to rethink how they award grants. States with AI laws the administration considers “onerous” could lose access to certain federal funds, including parts of the Broadband Equity, Access, and Deployment (BEAD) program. Agencies are also told to consider conditioning discretionary grants on states either not enforcing their AI laws or agreeing, in writing, not to enforce them while the money flows.

States aren’t meant to be punished for trying things out. But here, the White House is using federal money as leverage to make states think twice before passing or enforcing their own laws.

Filling a vacuum Congress left open

The US still has no comprehensive AI law. Instead, older consumer protection and civil rights statutes fill the gap. States like California and Colorado have tried to move faster.

In 2024, California lawmakers passed SB 1047, a high-profile AI safety bill that would have required developers of large, powerful AI models to conduct safety testing and implement safeguards against catastrophic harms. Governor Gavin Newsom ultimately vetoed it, citing concerns about stifling innovation, but the debate itself signaled how seriously the state was taking AI risk.

Colorado has gone further. The Artificial Intelligence Act, SB 24-205, passed in 2024 and set to take effect in 2026, requires companies deploying high-risk AI systems to assess and mitigate algorithmic discrimination. It doesn’t ban AI or dictate outputs.

It asks companies to identify risks, document impacts, and take reasonable steps to prevent harm. Trump’s order names this law specifically, arguing that its ban on “algorithmic discrimination” may even force AI models to produce false results in order to avoid a “differential treatment or impact on protected groups.”

These laws aren’t radical. They’re imperfect attempts to address real harms using the tools states have, and they mirror how countries around the world are regulating AI today. But Trump’s executive order treats them instead as obstacles to be cleared.

Free speech for software

One of the administration’s main complaints is that some state laws might force AI systems to change their outputs in ways that introduce “ideological bias” or even falsehoods. The order warns against laws that require models to alter “truthful outputs.” It argues that such rules could violate free speech or amount to deception under federal law. There’s just one problem. No one explains what a “truthful output” actually is.

AI models generate probabilities. They reflect the data they’re trained on, which often includes historical bias, social inequality, and flawed assumptions. State laws aimed at limiting discriminatory outcomes aren’t asking AI to lie. They’re asking developers to take responsibility when automated systems produce harmful results. But the order flips that logic on its head.

The executive order leans heavily on the First Amendment. It repeatedly warns that state AI laws could violate free speech by compelling disclosures or forcing changes to AI outputs. The Federal Trade Commission (FTC) is even told to spell out when state laws that alter AI outputs are preempted because they require “deceptive acts or practices.” This is a strange place to land.

The First Amendment exists to protect people, not software. Yet here, free speech is being used as a shield for AI systems and their makers, not for the individuals affected by algorithmic decisions. When an automated hiring tool weeds out qualified candidates, as Amazon’s experimental AI recruiting tool did before the company scrapped it in 2018 for systematically downgrading resumes from women, the order’s concern isn’t the person affected by that decision. It’s whether the AI’s output was changed in the name of fairness. That’s a quiet but telling shift in priorities.

The administration argues that state-level AI laws create a messy patchwork. That’s not wrong. Fifty different rules can be hard to navigate. But there’s a reason states stepped in. It’s because Congress hasn’t. Normally, when federal law overrides state law, Congress is involved. Hearings are held. Votes are taken. Here, preemption is happening through agency action and executive pressure. The Federal Communications Commission (FCC) is directed to review national AI disclosure rules that could override state efforts. The FTC, meanwhile, is being asked to use consumer protection law to take down state AI mandates it doesn’t like.

There is precedent here, too. After the FCC repealed federal net neutrality rules in 2017, it moved in 2018 to block states from passing their own protections, even as Congress failed to act. California was sued, and years of litigation followed. AI now appears headed down the same path.

Supporters of the executive order will say all of this is necessary to keep the US competitive. AI moves fast. Regulation slows things down. But innovation doesn’t have to mean governing by executive order. What the administration is really doing is choosing control over consent.

In the short term, it may simplify life for AI companies. In the long term, it risks eroding public trust in both technology and government.

A country that wants to lead in AI should be confident enough to debate its rules openly, and not impose them through lawsuits and grant conditions. When Washington starts policing the states for Silicon Valley, it’s fair to ask who the system is really being built for, and who’s left out of the conversation.


