In a sharply worded rebuttal that adds heat to an already simmering Silicon Valley rivalry, Anthropic CEO Dario Amodei has accused NVIDIA CEO Jensen Huang of twisting his words and intentions on AI governance and safety. Speaking on the Big Technology Podcast, Amodei slammed Huang’s claim that Anthropic believes only it should be allowed to develop advanced AI systems.
“That’s the most outrageous lie I’ve ever heard,” Amodei said, visibly frustrated. “I’ve said nothing that anywhere near resembles the idea that this company should be the only one to build the technology.”
The comments come amid a widening philosophical divide in the tech world—between those calling for controlled, measured AI deployment and others advocating for open innovation at full throttle.
A Battle of Beliefs
The feud escalated after Jensen Huang publicly accused Amodei of advocating for exclusive control over AI development. During VivaTech 2025 in Paris, Huang told reporters, “He thinks AI is so scary, but only they should do it,” referring to Amodei’s lobbying for export controls on semiconductor technology and repeated warnings about AI’s disruptive potential.
Amodei has indeed sounded the alarm on AI’s capacity to wipe out as much as 20% of entry-level white-collar jobs in the next five years—a prediction he shared with Axios earlier this year. Huang, on the other hand, has remained consistently upbeat, insisting AI will transform rather than destroy jobs.
“I pretty much disagree with almost everything he says,” Huang said at the summit.
Race to the Top vs. Race to the Bottom
On the podcast, Amodei elaborated on what he calls a “race to the top”—an approach he believes all AI developers should follow. “When you have a race to the bottom, it doesn’t matter who wins—everyone loses,” he said. “With a race to the top, it doesn’t matter who wins because everyone wins.” He pointed to Anthropic’s transparent policies, such as its “Responsible Scaling Policy” and open-access interpretability research, as proof that the company is not hoarding progress behind closed doors. Instead, he argued, these initiatives were designed to encourage safer practices across the entire industry.
“We’ve released our work so others can build on it,” Amodei said. “Sometimes that means giving up commercial advantages—but it’s worth it for the field to grow responsibly.”
A Clash Rooted in Business and Belief
There may also be financial motivations at play in this war of words. Amodei’s support for semiconductor export controls on China could hinder NVIDIA’s massive chip sales, particularly amid an AI boom in which demand for powerful GPUs is soaring. Huang, whose company stands to lose billions if such restrictions tighten, has not held back his criticism of Amodei.
Amodei, however, is adamant that the friction isn’t about limiting competition but about fostering responsibility in an industry where mistakes can have global consequences.
“It’s just an incredible and bad faith distortion,” he said of Huang’s allegation.
As the race toward superintelligence intensifies, the dispute between Amodei and Huang highlights an essential question: who gets to define “safe” in the age of AI?
While Meta, OpenAI, Google, and Anthropic continue pushing the frontiers of artificial intelligence, the real divide may not lie in model sizes or compute power—but in values. Should AI be guided by market dynamics and open-source contributions, as Huang believes? Or does it need more control and caution, as Amodei argues?