WASHINGTON –
Last summer, the Trump administration held an event titled “Winning the AI Race,” where it unveiled its AI Action Plan. Like the billion-dollar data-center deals announced during U.S. President Donald Trump’s trip to the Persian Gulf this past May, the plan is meant to enhance American leadership in AI. But since neither the plan nor those earlier announcements mentioned human rights, it is fair to ask what “winning” the AI race would actually mean for the U.S.
Many in Washington and Silicon Valley simply assume that American technology is inherently — almost by definition — aligned with democratic values. As OpenAI CEO Sam Altman told Congress this past May, “We want to make sure democratic AI wins over authoritarian AI.” The sentiment may be laudable, but new technological systems do not protect human rights by default. Policymakers and companies must take proactive steps to ensure that AI deployment meets certain standards and conditions — as already happens in many other industries.
Recent reports from the United Nations Working Group on Business and Human Rights, the U.N. Human Rights Council and the Freedom Online Coalition have stressed that governments and companies alike bear responsibility for assessing how AI systems will affect people’s rights. Existing international frameworks require all businesses to respect human rights and to avoid causing or contributing to human-rights abuses through their activities. But AI companies, for the most part, have failed to acknowledge and reaffirm those responsibilities.