The Trump administration has been clear in its support for artificial intelligence. From the AI Action Plan to GSA’s OneGov deals to emphasizing access to large language models, the White House’s message to agencies and industry has been to accelerate use of these tools.
But two recent events are clouding that yearlong push.
The General Services Administration’s draft update to the terms and conditions of its schedule contracts for AI tools and the Defense Department’s decision to label Anthropic’s Claude LLM as a national security risk are causing confusion and concerns across the federal community.
Multiple industry experts say GSA and DoD’s actions are sending mixed messages to industry and agencies alike about the role AI providers should, and can, play in the federal market going forward.
“The AI Action Plan comes out from the administration last July and it’s very hands off. It was one of the most vendor-friendly overarching policies we’ve ever seen,” said Jessica Tillipman, associate dean for government procurement law studies at the George Washington University, in an interview with Federal News Network. “But the follow-up since the action plan has been different. The Anthropic issue and now GSA’s AI draft rules run in the opposite direction. Both send a really bad message to industry. The administration wants to buy like commercial companies buy, but vendors see these behaviors with Anthropic, which is troubling in isolation, and when you put it together with GSA’s AI draft rules, there is a lot of concern about long-term damage to the AI marketplace.”
The DoD-Anthropic dispute centers on the company’s concerns about how the Pentagon is using its products. Anthropic is specifically worried that DoD would use Claude for mass surveillance of Americans or for autonomous weapons. After weeks of negotiations, DoD designated Claude a supply chain risk and President Donald Trump ordered all agencies to stop using the LLM.
The Pentagon now has 180 days to remove Anthropic’s products from its systems and defense vendors have to certify they are not using Claude in DoD work.
“At a time when, as demonstrated by the Iran war, they are using AI for all kinds of targeting and it’s been pretty remarkable, the administration is creating an environment where if someone disagrees with you on the terms of a contract and you can’t negotiate a resolution, the answer is to drive the company out of the government and possibly business? What message is that to all American AI firms?” said one industry observer, who requested anonymity for fear of retribution.
GSA adds to confusion
The same day DoD designated Anthropic a supply chain risk, GSA came out with a refresh to the terms and conditions of its schedule contracts that caused even broader concerns.
GSA’s nine-page draft clause would require vendors providing AI tools and services to meet a host of specific requirements, including ensuring that only “American AI systems” are used, disclosing all AI systems used in performance of the service, requiring the government to own all data and all custom developments and holding third-party vendors accountable for meeting the requirements of the clause.
GSA is accepting comments on the proposed changes, and just extended the comment period two extra weeks to April 3 at the request of vendors.
“This is a commercial program and GSA is putting these terms on all the contractors that appear to be inconsistent with terms of use in commercial practice. The question is how will companies react? And what is the ability to negotiate these terms and conditions?” the industry observer said. “In some ways, the government is putting a baseline down and saying, here is what we want and you can negotiate. But then you get into different terms for different companies. This language is a new barrier to entry. Companies have to jump through this hurdle with regards to the AI clause to get on schedules. Are some companies going to say they’d rather not do that? Probably.”
Industry experts say the White House is trying to use the government’s largess to drive the AI market. For example, through the OneGov deals, agencies have access to the top AI tools at a low cost. GSA is trying to reduce the barriers to entry for both vendors and agencies alike.
And just last week, GSA and the National Institute of Standards and Technology detailed a new partnership to strengthen how the agencies evaluate artificial intelligence models and services.
Through the collaboration, GSA says NIST’s Center for AI Standards and Innovation (CAISI) will “provide tooling and methodological guidance to help GSA evaluate advanced AI models, select and interpret benchmarks, and conduct hands-on testing within real federal workflows. GSA and NIST will also create practical resources, including clear evaluation guidelines and checklists, that other agencies can use to assess AI tools for their own missions.”
Emails to GSA seeking comment on its draft rules and the seemingly mixed message it’s sending were not returned.
“It would be very difficult for organizations to comply with the draft AI terms of service as currently written for any number of reasons,” said a second industry observer, who also requested anonymity for fear of retribution. “What we would probably see, if this overarching message from the administration continues, is vendors and agencies moving away from the schedules as a pathway to sell or buy AI tools, and both would look for other vehicles to consume and sell AI tools.”
The industry expert added GSA is undermining its OneGov efforts to date with these draft rules as most AI vendors would find it too difficult to sell through these contracts.
Challenge of managing AI tools growing
Outside of GSA, the Office of Federal Procurement Policy is leading the overhaul of the Federal Acquisition Regulation with a goal to reduce complexities, promote commercial practices and, again, reduce barriers to entry for vendors.
The Office of Management and Budget has issued three memos over the last year focusing on cutting red tape and regulations around AI.
Emails to OMB seeking comments on the conflicting messages the administration is sending were not returned.
The first industry observer added that the government may be overestimating its ability to shape the AI market.
“What new company with a new technology wants to face this possibility and enter the federal market?” the expert said. “These actions are not something that says ‘we are open for business.’”
Rebecca Pselos, a former senior analyst at the Government Accountability Office who worked on acquisition issues and is now the chief operating officer for Government Procurement Strategies, said these mixed messages are likely a symptom of a bigger challenge around managing AI tools.
“I think the government is trying to wrap its head around how are they trying to make sure AI is secure. It’s an important question. How do you protect the supply chain and protect the AI tools from adversaries?” Pselos said in an interview. “At the same time, there are a lot of things that industry didn’t expect to see, some of which were significant moves away from commercial practices. I think GSA was assuming contractors are equipped to certify their AI tools, but they haven’t been asked to dive down into their AI until this issue has been raised. So now the government is asking them to understand how their AI is being used, how it’s built and what their service providers know. None of that has been passed on to them, like data libraries and code. I think the government is asking for a whole new level of maturity that the industry hasn’t been asked to deliver on.”
Tillipman said on the surface, GSA’s direction of putting some more oversight and governance on AI tools is the right one. But she said instead of using a scalpel, GSA brought a hammer.
“GSA gets the diagnosis right. The government is buying AI with too little visibility into how systems use federal data, how the tools are using the data and how to avoid vendor lock in. It’s clear there is too little control over the value that can be created or obtained through government use,” she said. “The problem to me is governance, but they blow right past it. They go way beyond what is necessary to get what it is trying to achieve, and it starts to look a lot less like governance and more like government control.”
Experts agreed that it’s unclear what the government’s ultimate goals are around AI governance.
Pselos said striking the right balance between understanding the underlying technology and data that go into AI tools and securing them is difficult.
“There is a tension there that has to be balanced. We have to be secure but not stifle innovation. I don’t think that stifling innovation is GSA’s aim. They want innovation from commercial companies and want to build industrial base and be a leader. But it’s the delicate balance of how you bring security along without limiting the industrial base,” she said. “My broad concern is we all know we need secure AI, but the question is how do we get there? We haven’t had a chance for government and industry to collaborate and talk about how to get there in a manner that we aren’t leaving companies behind. When you see language like what GSA put out in draft, industry may be saying, if we will be restricted for how we develop and use it, why do business with government?”
Copyright © 2026 Federal News Network. All rights reserved.