Beth Fukumoto: Ethics In AI Isn’t Just A Slogan Anymore

The Anthropic fight matters here because it reminds us what’s at stake and why Hawaiʻi needs to implement solid, ethical AI policy.

If you paid for an AI tool this week, you made a political choice, even if you did not realize it. Here is why that matters, and what we can do next.

The Pentagon gave Anthropic an ultimatum: either open your AI models for unrestricted military use or lose your federal contract. Anthropic’s CEO, Dario Amodei, refused. He specifically objected to two uses: mass domestic surveillance of Americans and fully autonomous weapons systems without human oversight.

In response, President Trump ordered all federal agencies to stop using Anthropic technology and gave the military six months to phase it out. Then, Defense Secretary Pete Hegseth labeled Anthropic a national security “supply-chain risk,” a label usually reserved for foreign adversaries, not American companies.


Hours later, OpenAI announced a new deal with the Pentagon to use its models on classified networks. OpenAI says it has kept its ethical safeguards in place, but the Pentagon has not explained why it would accept those limits from OpenAI while blacklisting Anthropic for insisting on the same.

Regardless, I’ve since shifted my subscription to Anthropic.

Anthropic began in 2021 when researchers left OpenAI over concerns about AI safety. It’s the only major AI lab that still retains all of its original co-founders, a sign of cultural cohesion despite commercial pressure. Amodei has said AI could help protect individual rights and democracy, but that outcome isn’t guaranteed; it takes real effort from companies, governments and citizens. This isn’t just a slogan. It’s the kind of stance that led to the conflict we just saw.

As I pointed out in a previous column, consumer choices, when they’re visible and deliberate, are a form of accountability. Hawaiʻi, unusually plugged into the online economy, has more of that leverage than we typically use. So, as a household, we made a conscious choice to support a company that upholds its ethics, even though so many others don’t.

But consumer pressure alone isn’t enough.

Dario Amodei, CEO and co-founder of Anthropic, has refused to allow the Pentagon to use his company’s models for unfettered military applications. (AP Photo/Markus Schreiber, File)

Hawaiʻi was already behind on AI governance before this week. The 2026 session offered real opportunities, but so far has produced little progress. And the federal government won’t set the rules here: an administration that blacklisted a domestic company for refusing mass surveillance won’t create the AI rules people need.

So what should Hawaiʻi actually do? Pennsylvania offers one useful starting point. The state ran a pilot program with ChatGPT and published its findings: government employees estimated they saved an average of 95 minutes per day using the tool, while spending only about 35 minutes a day on it. That’s a real gain.

But the same report found the AI invented non-existent legal cases and fabricated job qualification requirements — errors that only human review caught. Pennsylvania’s conclusion: AI works as a tool, but requires a human in the loop to stay accurate and ethical.

When it comes to AI adoption, what will our state policy be? Are we deliberately deciding which AI vendors to contract with, and why? These are the questions legislators and the public must consider. We can choose how we use these technologies and which tools we fund with state money.

We should also consider data protection and look to Utah for a solid legislative model. Starting in July, the Digital Choice Act will require social media platforms to let users download and transfer all their data, including social connections, comments and interaction history. It also requires platforms to build interoperability so users aren’t trapped in closed systems. This isn’t just a ban on one bad practice — it’s a structural change that gives users real control instead of relying on platforms’ goodwill.

Hawaiʻi needs these protections, too. And we’re still behind many other states in creating them.

AI regulation is one of the few issues with real bipartisan support right now. States are making chatbots identify themselves as non-human, limiting AI in insurance and protecting children from AI companion apps. These laws are passing in both red and blue states. Utah is not a progressive legislature. The desire for rules exists. Hawaiʻi’s leaders just need to act.

The Anthropic fight matters here because it reminds us what’s at stake. This episode is not just about one company’s ethics. It’s a signal to both consumers and lawmakers about the broader consequences of unchecked technology policy. A company refused to build tools for domestic surveillance or autonomous weapons without human oversight. The administration called that obstruction. That’s a chilling development. But we’re not powerless.

Consumers have real power to influence AI’s direction with their purchasing choices. At the same time, lawmakers are responsible for setting the guardrails that protect society from potential harm. For Hawaiʻi to truly shape its digital future, both forms of action are essential and urgent.
