AI regulations: A race to safeguard consumers

As artificial intelligence sprints forward, many argue that public policy to regulate the technology is falling behind.

And with the federal government playing catch-up, states are taking small steps to fill that vacuum. But many, including attorneys general, argue that state-level efforts amount to an inadequate patchwork of rules, and that what’s really needed is a uniform standard on the national and international stage.

Others cautioned against rushing to regulate without careful analysis of a proposed policy’s ramifications for businesses, consumers and companies’ freedom to innovate.   

As with the internet, U.S. Rep. Brittany Pettersen said, the AI genie cannot be put back in the bottle.

Today, AI can be used in every industry in some form or another. Used right, advocates say, its potential is vast — the technology can make jobs faster, smarter, more efficient. When used for nefarious purposes, it can make scamming, defrauding or ruining people’s reputations faster and easier.  

On the international stage, Pettersen said the race to set AI policy is playing out between the United States and China, and she’s worried about the latter gaining the upper hand. Recently elected to her second term in Colorado’s 7th District, Pettersen said public policy to regulate AI must take center stage next year, when a new Republican Congress takes over.

U.S. Sen. John Hickenlooper, a Colorado Democrat, looks to start making progress by year’s end.

“Great legislation starts with its first hearing, and I feel a great sense of urgency,” Hickenlooper said during a Nov. 19 Senate hearing focused on protecting consumers from AI deepfakes.

A “deepfake” refers to AI-generated images or videos that look real, a phenomenon popularized by a viral TikTok account featuring deepfakes of “Tom Cruise” goofing around.






‘I don’t want a future where China’s leading on AI’

With Donald Trump winning the presidency and Republicans securing majorities in both chambers of Congress, Pettersen said she is confident regulating AI will remain a priority, given that House Speaker Mike Johnson and House Minority Leader Hakeem Jeffries came together to create the AI Working Group in 2024.

Pettersen was appointed to the bipartisan group, which is exploring how AI affects the financial services and housing industries.

While there is bipartisan interest in developing AI regulations, Pettersen said a “dysfunctional Congress” creates challenges, especially in an election year, during which the working group’s progress stalled to some degree.

“I really worry about areas like this where we need to be leading the way globally and making sure that China is not the one doing that,” Pettersen said. “I don’t want a future where China’s leading on AI. It needs to be the United States and we have to come together in Congress to bring comprehensive pragmatic, bipartisan solutions. It cannot matter (which party) has the majority. This needs to continue to be a bipartisan effort.”

Pettersen said it is vital that the U.S. set global standards that other countries will follow, noting that such safeguards are “critical for our national security and for consumers.”

When asked if China is already leading the U.S. in setting global policy, Pettersen said no, but admitted the race is on.

“China is making significant investments,” she said. “We can’t get behind them, and this is really going to define, you know, what the next 100 years looks like for global leadership, and we need to make sure that it’s people here in the United States that are benefitting not only from these technologies in their lives, but also the financial benefit of innovating and leading here in the U.S.A.”

Already, China is advocating that the United Nations take the leading role on global governance of AI, a move that could sideline the U.S.

While both China and the U.S. agree that AI poses risks, China has built extensive surveillance systems with AI components that track its citizens through chat apps and mobile phones. The U.S. has criticized China’s approach.

Pettersen said the risk to public interest continues to grow “exponentially” as companies develop AI technology at record pace. 

Comparing the moment to when the internet first came online, Pettersen said the U.S. must now work out the guardrails.



AI by state

The illustration shows how states are taking action to regulate artificial intelligence.



State laws are ‘second best’

Pettersen said that, beyond the global stage, the U.S. needs national standards as a preemptive guide for states to follow.

“The debate will continue, but right now, what we’re seeing is that without federal action, we’re seeing patchwork approaches across states,” the congresswoman said. “It makes it incredibly difficult, I think, for AI industries and how they’re navigating some of those regulations. So, I think a national standard will help give states those protections and guidance.”

Hickenlooper said states are moving forward and conceded it is a patchwork approach. He said some states are focusing on laws that protect election integrity, others on non-consensual intimate imagery, while some have done nothing at all.

In Colorado, Attorney General Phil Weiser told Colorado Politics he is taking AI regulation seriously while awaiting federal guidelines, calling state policy “second best.”

“The first best world is one where we have federal leadership and federal public policy frameworks in the areas of AI,” Weiser said. “If we can’t live in the first best world, the second-best world for us to live in is a world where states are providing that leadership.”

He added: “And I do prefer state leadership in technology to no leadership in technology policy. And, if you will, the third best world, or maybe you say the worst world, is there is no leadership at all, and technology companies have no guardrails when it comes to protecting consumer privacy or how they manage data or how they use artificial intelligence.”

The best way to build trust and operate in a way that consumers can believe in requires the federal government to provide policy leadership, Weiser said.

In 2024, the Colorado legislature passed a first-of-its-kind bill that sponsors said would protect consumers from “bias” in artificial intelligence development. Several organizations representing the technology industry had urged Gov. Jared Polis to veto the bill, arguing it would harm small businesses developing AI technology.

At its core, Senate Bill 205 establishes regulations governing the development and use of artificial intelligence in Colorado and focuses on combating “algorithmic discrimination.” It defines “algorithmic discrimination” as any condition in which AI increases the risk of “unlawful differential treatment” that “disfavors” an individual or group on the basis of age, color, disability, ethnicity, genetic information, race, religion, veteran status, English proficiency and other classes protected by state law.

In May, Polis reluctantly signed the measure into law; it is slated to go into effect in February 2026. In the meantime, Weiser’s office is tasked with implementing the law by creating audit policies and identifying high-risk AI practices.

While Polis signed the law, he and Weiser vowed to revise it before it actually takes effect.

In a statement to Colorado Politics, the governor’s office said, “Governor Polis believes this legislation was the beginning of a conversation around AI and looks forward to continuing to discuss this issue with legislators and stakeholders and ensure the final product supports innovation before the 2026 implementation date. Governor Polis is a former tech entrepreneur and supports technological advancements like AI that can support consumers, reduce bias, and help drive Colorado’s economy.”

In June, Weiser, Polis, and Democratic Senate Majority Leader Robert Rodriguez, the bill’s sponsor, signed a letter promising several steps before implementing the law in 2026. This includes creating a task force to revise the new law in the upcoming 2025 session to minimize unintended consequences. The goal, they said, is to “provide for a balanced regulatory scheme that prevents discrimination while supporting innovation in technology.”

The letter also identified the areas the task force will tackle, including the following:

• Refining the definition of artificial intelligence systems to cover only the most high-risk systems

• Focusing on the developers of these high-risk systems, rather than on small companies that are deploying the technology

• Shifting from a proactive disclosure regime to traditional enforcement

• Making clear that the consumer right of appeal refers to consumers’ ability to appeal to the attorney general

• Considering other measures the state can take to become the most welcoming environment for technological innovation

Alvin McBorrough, founder and managing partner of OGx, a Denver consulting firm that focuses on technology and analytics, applauded Colorado lawmakers for approving Senate Bill 205.

“The ultimate goal is to provide protection for the well-being of the citizens, public interest, and trust,” McBorrough said. “It’s a pretty comprehensive rule that has been developed to make sure that developers and deployers of AI technology systems have some level of control around it.”

As a practitioner and advisor in the AI industry, McBorrough said he reviewed the new law and asked himself whether it is better to have some form of regulation right now that can be tweaked, instead of none at all.

“I was a proponent leaning towards going forward and coming up with some kind of framework and put that in place,” he said. “You can always come back and improve upon it, but if there is nothing, there is a free-for-all, and that is what is happening right now.”

McBorrough said one of the biggest drawbacks of AI is the development of “algorithmic” biases, which can have negative impacts in housing, healthcare and education.

“We’ve never seen one technology so profound and so promising,” McBorrough said. “I think at the end of the day, just from a human perspective, I will say this is going to be another area that will profoundly impact all of us — the way we learn, the way we play, and the way we continue to grow. We have to make sure that whatever we are developing is 100% on the up and up.”

On the other hand, some 200 business leaders, including some of Colorado’s most prominent executives, earlier wrote to the governor about their “collective concern” regarding the new law.

Earlier, too, Eli Wood, the founder of software company Black Flag Design, expressed worry that the legislation would inadvertently disadvantage small startups, such as his company, that depend heavily on open-source AI systems. Wood said the bill could penalize small businesses for “algorithmic” bias identified in their systems, even if the bias originated in the open-source system rather than in what the small business itself developed.




Alvin McBorrough, founder and managing partner of OGx, a Denver consulting firm that focuses on technology and analytics, speaks at a recent AI Tomorrow Workshop in Boulder. 



Weaponizing AI

In chairing a Nov. 19 Senate hearing, Hickenlooper heard testimony about the dangers the public faces if or when AI technology falls into the wrong hands. Hickenlooper is pushing for the passage of several AI-related bills to protect minors and veterans from AI misuse.

During the hearing, Hany Farid, a professor at the University of California Berkeley School of Information, said lawmakers considering a five- or 10-year plan need to realize that without regulations, “everything is going to get worse.”

Farid pointed out that ChatGPT went from zero to one billion users in a year. According to Bloomberg, generative AI is slated to become a $1.3 trillion industry by 2032, up from a $40 billion market in 2022.

“Five years is an eternity in this space,” Farid said. “We need to be thinking tomorrow and next year. Here is what we know — hundreds of billions of dollars are being poured into predictive AI and generative AI. The technology is going to get better, it’s going to get cheaper, and it’s going to become more ubiquitous. That means the bad guys are going to continue to weaponize it unless we figure out how to make that unbearable for them.”

To make it unbearable, Farid said, lawmakers must hold big tech companies accountable as developers of a technology that, in the hands of “deployers,” can be weaponized for scams and other bad behavior.

Weaponizing AI includes a variety of scams that are starting to cost the general public money, time and dignity. Farid said a restaurant or retailer can tell AI to produce 20 positive reviews to post online. Fake videos and images can ruin someone’s reputation, while senior citizens and veterans can be scammed out of money because the technology is so realistic, he said.

Nude ‘deepfakes’ target children 

Dorota Mani testified that her daughter became a victim of “deepfake” pornography distribution.

Mani testified that a high school classmate created “deepfakes” of her daughter, nude images that were circulated around school. Because the school did not have any AI policies, and because neither the state nor the federal government had laws against the production of fake nude images, the student who produced them faced little to no consequence.

“I want to start with saying that our situation is really not unique,” Mani said. “It has been happening and it is happening right now. Last year, when we found out, or we were informed by the school, what has happened to us, the first thing we did, obviously, was we called a lawyer in the school sector. We were informed that nothing can be truly done because there are no school policies and no legislation, and the lawyers repeat exactly the same thing.”

She added: “So, when my daughter heard from the administration that, you know, she should be wearing a victim’s badge and just go for counseling, she came home, and she told me, ‘I want to bring laws to my school so that way my sister, my younger sister, will have a safer digital future’.”

In November, explicit images of nearly 50 female students created controversy inside a Pennsylvania school district. According to reports, the explicit photos were created and posted by a ninth-grade male student and not removed or reported to police for months.

Mani said this is but another case showing that schools are unprepared to battle “deepfake” posts and why guidance from the federal government is imperative.

Mani applauded current efforts to pass the TAKE IT DOWN Act.

Co-sponsored by Hickenlooper, the TAKE IT DOWN Act seeks to protect victims, including minors, from AI misuse. The measure passed the Senate on Dec. 3. If signed into law, it would criminalize publishing intimate imagery on social media and other online sites without the subject’s consent.

The bill would also require social media companies to develop procedures to remove content upon notification from a victim. For instance, once a platform like Facebook is informed of a deepfake video or image, the company would have 48 hours to take it down.

“The TAKE IT DOWN Act allows the victims to take control over their own image, and I think that is so important,” Mani said. “It gives the freedom to anybody affected to just move on with their life, which sometimes that’s all they want.”

According to a 2019 Sensity report, deepfake pornography created without the consent or knowledge of the subjects accounted for 96% of all deepfake videos posted online.

With AI, scams have become easier, cheaper to deploy

Justin Brookman, the director of technology policy for Consumer Reports, said recent research suggests that generative AI can be used to scale “spear phishing,” the personalization of phishing messages based on personal data to make them more convincing.

Using only a few seconds of a person’s voice and images easily found on social media, scammers are creating believable “deepfakes,” Brookman and fellow panelists agreed, costing consumers billions in losses each year.

Creating “deepfakes” is also getting cheaper as the technology improves. Brookman estimated that a spear phishing message that once cost its creator $4.60 now costs only 12 cents.

Brookman said the federal government can be more effective by beefing up the staffing at the Federal Trade Commission, even if no laws or new regulations are passed.

In his report to the Senate committee, Brookman said scams and fraud are already illegal under a variety of federal and state civil and criminal laws. However, the FTC has only 1,292 full-time employees to pursue its competition and consumer protection missions. The agency’s staffing has plateaued for about 14 years, Brookman said, noting that in 1979 it had 1,746 employees.

“The FTC is expected to hold giant, sophisticated tech companies accountable for their transgressions, but they are severely hamstrung by unjustifiable resource constraints,” Brookman said.

The Associated Press contributed to this story. 
