Artificial intelligence experts at UCLA said a Trump administration executive order deregulating AI creates cultural challenges and carries significant business implications.
One of President Donald Trump’s executive orders – Executive Order 14179 – was signed in January and revoked previous AI regulation policies that he described as “barriers to American AI innovation.” Titled “Removing Barriers to American Leadership in Artificial Intelligence,” the order emphasizes global leadership, competition and AI deregulation.
Framing AI development primarily through a lens of national competition and economic leadership is problematic, said Ramesh Srinivasan, a professor of information studies.
“Any attempt to rein, to regulate or sort of direct AI in a nationalist direction means that AI is likely to serve those purposes,” Srinivasan said. “Many of us know well that a certain kind of provincial or retrenched nationalism is not likely to serve American citizens well at all, given how globalized the economy is, given that many people in this country are from all around the world.”
He added that he believes a nationalistic approach is naive because AI development depends on supply chains that are global in scope.
“That’s sort of why we see old school, like Gilded Age-style land grab attempts by the administration,” he said. “This is kind of an old school, almost imperial, colonial model but it’s sort of retrenched in a sense because the Gilded Age functions in a world that was far less globalized than today.”
Another concern is the possibility of “xenophobic, nationalistic” AI, Srinivasan added, because AI development is highly influenced by the values and priorities of the governing state, not just its developers.
AI could either be divisive or democratizing in the future, Srinivasan said. He added that he believes the current debate around AI resembles the debates about social media over the last decade.
John Villasenor, a professor of electrical engineering, law, public policy and management, said he is concerned about the overregulation of AI. He added that there is a tendency to regulate AI out of fear, fixating on negative outcomes.
“I don’t think the question is, ‘What are the bad things that could happen with AI?’ And let’s regulate all of them,” Villasenor said. “The question is, ‘What are the bad things that could happen with AI that aren’t already addressed under existing non-AI specific frameworks?’ And if you identify that subset of problems, well then that’s an area where it may make sense to legislate.”
The future of AI was also explored at the UCLA Anderson School of Management Healthcare Analytics Symposium 2025. Speakers from AI organizations, such as Netomi and Mila Health, discussed real-world applications of AI in diagnostics and patient engagement, as well as its implications for data privacy and governance.
With organizations integrating AI into the workplace, it is important to be explicit about what employees are allowed to do and what is prohibited, said Khalil Smith, a symposium speaker who worked at Apple for over a decade.
“Most of us are not against change. We are against change that is imposed upon us,” Smith said. “When you work in an organization and largely in a lot of institutions like health care and others, people feel like they’ve lost a sense of autonomy and a sense of control, and that creates backlash. That creates this pushback.”
Smith added that some organizations identify how employees can experiment with AI and what outcomes they expect, while others do not, leaving workers fearful of being punished for using AI.
Regulation is essential, Srinivasan said, especially to improve transparency about what data is used to train AI and to distribute AI's economic benefits evenly across a wide population. He added that regulation does not necessarily present barriers to AI development.
“Regulation is not about stymieing growth,” he said. “It’s about directing it.”