Trump’s AI policy shifts focus to ‘high impact’ use cases

The Trump White House is focusing federal artificial intelligence efforts on “high impact” use cases and directing agency chief AI officers to accelerate the adoption of AI technologies.

The new AI policy is laid out in a pair of April 3 memos signed by Office of Management and Budget Director Russell Vought. One covers the acquisition of AI, while the other addresses the “federal use” of AI.

The latter memo gives agencies 180 days to develop an AI strategy for “identifying and removing barriers to their responsible use of AI and for achieving enterprise-wide improvements in the maturity of their applications.”

In a fact sheet, the White House said the goal of the AI policy is to “empower AI leaders to remove barriers to AI innovation.” Agency chief AI officers, the fact sheet continues, are “redefined to serve as change agents and AI advocates, rather than overseeing layers of bureaucracy.”

The White House also criticized “the risk-averse approach of the previous administration.” President Donald Trump has already rescinded a Biden-era AI executive order and an associated implementation memo. 

But a former senior AI leader, who requested anonymity to speak candidly, said the new AI policy represents a “smart evolution” on the Biden administration’s approach. The memo continues to require agencies to maintain AI use case inventories, apply risk management practices, and rely on chief AI officers to lead agency AI efforts.

One major change is the shift to focusing risk management on “high-impact” AI use cases. Vought’s memo directs agencies to apply minimum risk management practices to any high-impact AI use case, including conducting pre-deployment testing and completing an AI impact assessment.

The AI policy memo includes 15 distinct categories of high-impact AI use cases, including safety-critical functions for critical infrastructure, medical devices and healthcare diagnoses, and making determinations for government benefits.

Under the Biden administration, OMB had broken such AI use cases into two areas: “rights-impacting” and “safety-impacting.” Each had its own distinct list of categories and risk management practices.

The former senior official said the scramble to meet OMB’s deadlines for documenting how agencies were addressing risks across those two areas “was a massive effort across every agency.”

“It’s good to have an opportunity to streamline some of that,” the official said.

Vought’s memo gives agencies 365 days to document their compliance with the requirements for high-impact use cases. Agencies are required to “safely discontinue” any noncompliant high-impact use cases.

August Gweon, an associate in Covington &amp; Burling’s Washington office, said the new AI policy gives agencies more discretion in how they approach risks.

“These risk management practices are worded in a way that allows agencies to think, is this aligned with the rapid adoption and use of AI to further the agency mission?” Gweon said.

Vought’s memo also directs agencies to provide an option for end users and the public to submit feedback on any high-impact AI use case.

Taka Ariga, the former chief AI officer at the Office of Personnel Management, applauded the memo’s focus on public feedback and the continued emphasis on the chief AI officer role, but raised concerns about ongoing agency workforce cuts.

“The OMB’s AI memo’s focus on delivering measurable value from use of AI is encouraging, as is its specific focus on public consultation, but I am concerned about agencies’ ability to deliver on this promise given the recent workforce reductions impacting AI talent,” Ariga told Federal News Network.

The recent Department of Health and Human Services layoffs included several IT and cyber teams. The Department of Homeland Security is also expected to target its new “AI Corps” as part of broader headquarters cuts.

The former senior official pointed out that good AI policy goes beyond technology talent. Agencies also rely on legal counsel, civil rights and civil liberties teams, and privacy offices, among other areas, to implement AI projects, especially high-impact use cases. Some agencies have already targeted those types of offices.

“Even with the streamlining of some of the more onerous requirements and prior policy, this all takes people to implement,” the official said.

The Trump administration’s approach to using AI also reportedly leans heavily on the Department of Government Efficiency initiative. DOGE is not mentioned in Vought’s memo, but according to multiple news reports, DOGE teams have been using AI to rewrite agency code and to make decisions about terminating federal employees, among other activities.

Lawmakers and good governance groups have raised concerns that DOGE is skirting privacy and security requirements to connect AI systems to agency data. And Vought’s memo makes clear that “high-impact” use cases include areas such as accessing sensitive agency IT systems, detecting fraud and misuse in government benefits, and making federal employment decisions.

Quinn Anex-Ries, a senior policy analyst with the Center for Democracy and Technology, said the OMB guidance is “encouraging.” But he pointed to how DOGE’s reported use of AI may not comply with transparency and risk management requirements.

“The true test will be how OMB works with all federal agencies, including DOGE, to implement these critical requirements in a timely and transparent manner,” Anex-Ries said.

Copyright © 2025 Federal News Network. All rights reserved.