White House Secures AI Industry’s Safety Commitment on Deepfakes

Key players in the AI sector, including Adobe, Anthropic, Cohere, Microsoft, OpenAI, and Common Crawl, have pledged to the U.S. government to take steps against the use of AI technologies to create harmful deepfake content, particularly non-consensual and exploitative imagery. The pledge is aimed at stopping the technology's misuse in generating such damaging material.

Escalating Threat of AI in Image-Based Abuse

The administration points to a concerning rise in AI-generated sexual abuse, notably through manipulated images. Deepfakes pose a significant danger to public safety and individual privacy, and the White House stresses the need for ethical data sourcing and preventive safeguards to combat the problem.

The major tech firms will responsibly manage their data sourcing and implement measures to curb AI-facilitated sexual abuse. These steps include model evaluations and rigorous testing to prevent the generation of harmful content.

The Register reports that Common Crawl supports the initiative's broader objectives but has not endorsed all of the measures, given that it collects web data rather than developing AI models. Rich Skrenta of Common Crawl explained that because the organization's role is to amass web data, it is difficult for it to specify exclusions for training datasets as the new commitments require.

Nature of the Commitments and Background

This is the second time within a year that AI companies have made voluntary pledges to the Biden administration. In July 2023, firms including Anthropic, Microsoft, OpenAI, Amazon, Google, Inflection, and Meta committed to testing AI models, sharing findings, and watermarking AI-generated content to mitigate misuse. These commitments, however, remain voluntary and unenforceable.

While the European Union enforces stricter AI regulations, the U.S. continues to rely on voluntary commitments from leading tech firms. The Register has asked the White House whether it plans to pursue binding AI policies. The spread of deepfakes affects private citizens and public figures alike, posing risks of misinformation ahead of major elections worldwide.

Efforts Targeting Image-Based Abuse

President Biden and Vice President Harris have prioritized tackling gender-based violence since taking office, emphasizing how technologies like AI amplify the risks. The harms of image-based sexual abuse fall disproportionately on women, children, and LGBTQI+ communities. Vice President Harris acknowledged the urgency of this threat, amplified by AI, during discussions in London ahead of the AI Safety Summit.

Since the administration's call to action in May, several companies have stepped up their efforts against image-based sexual abuse. Google has updated its search platform to curb the spread of such imagery. GitHub has banned the distribution of software tools designed to create non-consensual intimate images.

Microsoft is working with StopNCII.org to identify and remove duplicates of non-consensual intimate images from Bing search results. Meta has removed accounts involved in financial sextortion and prohibits the promotion of AI tools that create abusive imagery. Snap Inc. is improving its reporting mechanisms and offering support to affected users.
