AI potential outweighs deepfake risks only with effective governance: UN

September 15 marks the United Nations’ International Day of Democracy, and this year Secretary-General António Guterres’ annual message focuses on AI.

“AI must serve humanity equitably and safely,” Guterres says. “Left unchecked, the dangers posed by artificial intelligence could have serious implications for democracy, peace and stability. Yet, AI has the potential to promote and enhance full and active public participation, equality, security and human development. To seize these opportunities, it is critical to ensure effective governance of AI at all levels, including internationally.”

It’s raining bills in California ahead of legislative deadline

Showing itself to be an enthusiastic believer in that call for effective governance, California has approved a bevy of proposals to regulate AI ahead of a looming legislative deadline. AP reports that the state legislature is voting on hundreds of bills to send to Governor Gavin Newsom before the session closes on Saturday.

The rush to legislate includes proposals to ban deepfakes related to elections and to require large social media platforms to remove material identified as deceptive, starting 120 days before an election and continuing for 60 days after. Ads that use AI to manipulate content would have to be labeled. And two proposals would make it illegal to use AI tools to create child sexual abuse material, closing a gap in current law under which district attorneys cannot prosecute people who possess or distribute AI-generated child pornography unless they can prove the materials depict a real person.

Whether Newsom is as eager as his legislature to push the laws forward remains to be seen. He has until September 30 to sign or veto them; he can also let them become law without a signature. But the governor has spoken out against overregulation, citing both its hard and soft costs.

Laws aim to codify rules to prevent AI cloning

The flurry of bills also concerns worker protections – which in Hollywood means protecting actors and voice actors from being replaced by deepfake AI clones. Per AP, the measure mirrors language in the deal SAG-AFTRA made with movie studios last December.

The state is also considering penalties for those who clone the dead without obtaining consent from the deceased’s estate – a bizarre but very real concern, as late celebrities begin popping up in studio films.

A further proposal aims to ban call centers from replacing their human staff with AI workers.

In addition to imposing prohibitions and penalties, California lawmakers also want to teach people more about AI. One proposal would require developers to disclose what data they use to train their algorithmic models. Another aims to interrogate models for potential bias and limit government contracts to models that meet certain safety protocols. Yet more would set AI guidelines for schools and incorporate AI skills into math, science and history curricula.

Massive deepfake porn ring on Telegram uncovered

AI may ultimately find a place in schools, but news out of Korea illustrates the potential dangers of making AI freely available without regulation in place. The BBC reports on a scoop by reporter Ko Narin, who found that Korean students were being blackmailed with sexually explicit deepfake images. Police have linked deepfake porn rings to messaging accounts on Telegram, where users traded deepfake images of students as young as middle school age.

To date, more than 500 schools and universities have been identified as targets. Many of the victims and perpetrators are suspected to be under 16, South Korea’s age of consent.

The Seoul National Police Agency has now announced an investigation into Telegram over its role in enabling the distribution of fake pornographic images of children.

There’s always liveness detection, says 1Kosmos COO

If you find yourself suffering from deepfake despair, Siddharth Gandhi is here to remind you that there are remedies. Writing in ET Edge, the COO of 1Kosmos for Asia Pacific says strong security is possible by pairing liveness detection with device-based algorithmic systems that can detect injection attacks in real time.
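As a rough illustration of what pairing those two checks might look like, here is a minimal sketch in Python. Everything in it – the signal names, thresholds and decision logic – is a hypothetical assumption for demonstration, not 1Kosmos’ product or any real API:

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    """Signals a client might gather per capture; all fields are invented for illustration."""
    liveness_score: float          # 0.0-1.0 output of a presentation-attack detector
    camera_is_virtual: bool        # device-side hint that a virtual camera driver is in use
    frame_timing_jitter_ms: float  # injected video streams often show unnatural frame timing

LIVENESS_THRESHOLD = 0.9  # assumed operating point, not a vendor-published value
MAX_JITTER_MS = 2.0       # likewise an arbitrary illustrative bound

def authenticate(signals: FrameSignals) -> bool:
    """Accept only if the face looks live AND the capture channel itself looks genuine."""
    if signals.camera_is_virtual:
        return False  # classic injection vector: prerecorded video fed through a virtual camera
    if signals.frame_timing_jitter_ms > MAX_JITTER_MS:
        return False  # replayed or injected streams can betray themselves in timing
    return signals.liveness_score >= LIVENESS_THRESHOLD

print(authenticate(FrameSignals(0.97, False, 0.4)))  # True: live face, genuine channel
print(authenticate(FrameSignals(0.97, True, 0.4)))   # False: injection suspected despite high liveness
```

The point of the pairing is that neither check alone suffices: a perfect deepfake may pass the liveness model, but the device-side channel checks can still flag how it was delivered.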

He also points to positioned live ID systems as “another game-changer” in the industry. “Positioned live ID systems are user-friendly and require minimal interaction from the user. The liveness check and verification processes are quick and seamless, making the login process much smoother.”

Echoing many in the digital identity verification sector, Gandhi notes that “securing biometric authentication is not a one-time fix, it’s a continuous journey that requires a proactive approach from businesses, researchers, and security professionals. Future security systems should be dynamic, adapting to new challenges posed by evolving technologies.”

Algorithms trained on affluent white people

Such dynamism sounds great, as long as you have the resources for it. An article in Wired explores a potential hole in António Guterres’ plan for AI that can “serve humanity equitably and safely” – namely, the lack of diversity in training sets for deepfake detection tools, and the general lack of resources in the global south.

Reporter Vittoria Elliott writes that “most tools currently on the market can only offer between an 85 and 90 percent confidence rate when it comes to determining whether something was made with AI.” When content comes from non-white, non-English-speaking countries, “that confidence level plummets.”

In many of these countries, the quality of technology available to the masses is limited, meaning poor-quality images are more common; and many records are still in hard copy, meaning they can’t be fed into training sets. Photos taken on cheap Chinese smartphones are unlikely to play well with deepfake detection models trained on high-resolution reference data.

“Without the vast amounts of data needed to train AI models well enough to accurately detect AI-generated or AI-manipulated content,” Elliott writes, “models will often return false positives, flagging real content as AI generated, or false negatives, identifying AI-generated content as real.”
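Elliott’s two failure modes can be made concrete with a toy thresholded detector. The scores, labels and threshold below are invented for illustration, not taken from the Wired piece or any real tool; the point is only that when a detector’s confidence hovers near its decision threshold, both error types appear:

```python
def classify(ai_score: float, threshold: float = 0.5) -> str:
    """Flag content as AI-generated when the detector's score crosses the threshold."""
    return "ai-generated" if ai_score >= threshold else "real"

# (detector score, ground truth) pairs; a weak detector scores near the threshold
samples = [(0.55, "real"), (0.48, "ai-generated"), (0.92, "ai-generated"), (0.10, "real")]

false_positives = sum(1 for score, truth in samples
                      if classify(score) == "ai-generated" and truth == "real")
false_negatives = sum(1 for score, truth in samples
                      if classify(score) == "real" and truth == "ai-generated")

print(false_positives, false_negatives)  # 1 1 - one of each error type
```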

ID R&D runs through current deepfake typology

In the meantime, deepfakes themselves are getting more diverse. On LinkedIn, Konstantin Simonchik, chief scientist of ID R&D, has posted a video demonstrating different varieties of deepfakes. Face swapping using GANs, full-face synthesis using generative models, lip sync deepfakes that create the illusion of false speech, and prompt-driven image animation are among the new tools fraudsters are using to commit deepfake fraud.

“Deepfakes are diverse and ever-evolving,” Simonchik says. “Deepfake detection work is more important than ever.”

Article Topics

1Kosmos  |  biometric liveness detection  |  biometric-bias  |  California  |  deepfake detection  |  deepfakes  |  ID R&D  |  South Korea
