Photos of Australian children used illicitly to train AI tools, HRW reports

Personal photos of Australian children are being misused to power artificial intelligence (AI) tools, Human Rights Watch (HRW) reported Tuesday.

According to the report, these pictures are obtained from the web without the children’s or their families’ knowledge or consent and compiled in a data set that companies use to train their AI tools. HRW, an international NGO, has previously reported on similar findings and expressed concerns about such AI tools being used to manipulate children’s likenesses to produce harmful deepfakes.

HRW’s analysis found links to photos of identifiable Australian children in LAION-5B, a data set used to train major generative AI models. Personal information such as the children’s names, ages, and locations is noted in the captions of some photos, making the children easily identifiable and traceable. The analysis “found 190 photos of children from all of Australia’s states and territories,” which “span the entirety of childhood.” The report emphasises that many of the reviewed images had a “measure of privacy” and had been seen by only a limited number of people: they could not be found through an online search and were not posted on publicly accessible versions of blogs or websites.

HRW reviewed fewer than 0.0001 percent (under roughly 5,850 images) of LAION-5B, a large-scale data set containing 5.85 billion image-caption pairs. The German non-profit organisation LAION, which manages the data set, confirmed the findings of HRW’s analysis on June 1 and has pledged to remove the personal photos. HRW expresses concern that AI models can reproduce identical copies of the material they were trained on, which has previously led to leaks of sensitive data and private information that companies have been unable to prevent effectively. HRW argues that this, coupled with current AI models’ inability to forget the data they were trained on, creates privacy risks that can lead to exploitation and further harm.

HRW emphasises that “malicious actors have used LAION-trained AI tools to generate explicit imagery of children,” drawing on both “innocuous photos” and explicit child sexual abuse imagery. The report further notes that this “substantially amplifies” the risk of deepfakes, in which a child’s likeness is digitally manipulated through AI to produce realistic images or videos of things that never happened.

Australia recently introduced the Criminal Code Amendment (Deepfake Sexual Material) Bill, which criminalises the creation and sharing of non-consensual sexually explicit deepfakes. The bill covers only sexual material featuring adults; similar material featuring children is handled under the Criminal Code as child abuse material.

Ahead of the Australian government’s introduction of reforms to the Privacy Act in August, which include adding a Children’s Online Privacy Code to the act, HRW children’s rights and technology researcher Hye Jung Han says, “The Australian government should urgently adopt laws to protect children’s data from AI-fueled misuse.” HRW further suggests the code should prohibit “scraping children’s personal data into AI systems” and “the non-consensual digital replication or manipulation of children’s likenesses.” The report additionally recommends incorporating processes to “seek meaningful justice” for children who have been harmed.

Author: Rayne Chancer