What Teachers and Principals Need to Know About ‘Deepfakes’

“Deepfakes”—artificial intelligence-manipulated video, audio, or images created using someone’s voice or likeness without their permission—are the dark frontier of AI, and a top concern for school districts confronting the expansion of free, easy-to-use AI tools.

There have been reported instances of students using AI tools to generate fake pornographic images of their classmates and fake videos of their teachers or principals. Staff members, too, have allegedly generated fake audio clips of principals or other colleagues.

Most districts are ill-equipped to handle these incidents. AI technologies are evolving rapidly, and most educators still haven't received any training on the technology's potential harms and benefits.

In a panel discussion during a Sept. 19 Education Week K-12 Essentials Forum, two experts—Andrew Buher, the founder and managing director for Opportunity Labs, a nonprofit research, policy, and consulting lab, and Jim Siegl, a senior technologist with the Future of Privacy Forum—discussed what schools need to know about responding to and preventing deepfakes.

Here are their insights and advice.

What are the harms of ‘deepfakes’ for students and educators?

“I don’t think we fully know yet” what the harms of deepfakes are, Buher said. But with the incidents in the past year, schools are beginning to get a better picture.

To begin with, deepfakes could affect student and staff mental well-being, as well as their reputations and employability, Buher said.

What role do schools play in preventing or curbing this behavior?

Schools have an obligation to create a safe learning environment, Siegl said. That includes protecting and disciplining students and addressing significant disruptions to the learning environment. For example, Siegl said, a sexually explicit deepfake image of a student that circulates online among peers disrupts the learning environment—and as a consequence, a school would need to address that behavior.

The two panelists pointed out that even though the ability to create deepfakes is a relatively new technological development, schools in many cases already have other policies and procedures in place they can leverage to respond to these incidents. For instance, schools can use their student code of conduct policies or procedures around cyberbullying and harassment to address deepfake issues.

Buher said these incidents are also teachable moments for kids. They can be used to help students and staff understand what deepfakes are and what their impact is on people’s lives. These teachable moments could be part of a broader media literacy initiative, the panelists said.

Do state and federal laws address this kind of tech use?

Congress is considering a few bills related to regulating deepfakes, according to the panelists, but so far, none have passed.

To deal with current incidents, schools can already leverage some federal laws in place. The most recent Title IX regulations, for instance, specifically call out deepfakes as an example of sexual harassment, and the Family Educational Rights and Privacy Act, or FERPA, dictates what schools can share with law enforcement when determining how to handle a deepfake incident that disrupts the learning environment.

A few states—such as California, Illinois, and Washington—recently passed laws related to deepfakes, Siegl said. Buher emphasized that educators need support navigating how those laws apply to K-12 schools and what schools are empowered to do legally regarding behavior that occurs online and often outside of school hours.

What are some ways to detect deepfakes?

Some technical solutions are being developed to detect deepfakes, but the panelists said they wouldn’t recommend schools spend money on them because their quality and efficacy are unproven.

Instead, schools should focus on educating their staff and students about this topic and ensuring that they have the resources and the skills they need to understand the challenges around deepfakes and other AI-generated media, the panelists said.

“There isn’t going to be an easy button,” Siegl said. “Focus on the people. Focus on the process.”
