AI ‘Deepfakes’: A Disturbing Trend in School Cyberbullying

A 2024 survey conducted by Education Week revealed that 71 percent of teachers had received no professional development related to AI.

In 2024, NEA produced a task force report that studied current and future AI use in classrooms and developed recommendations and guidelines for teachers using AI. One principle the task force identified was the ethical development and use of AI, which includes ongoing learning opportunities that help educators identify ethical AI dilemmas and handle them effectively when they arise.

‘Accountability and Inclusion Combined’ 

Pfefferkorn says implementing restorative justice practices is an effective way for schools to enforce accountability. In these cases, that could mean taking the affected student’s feelings into account, listening to what they want, or having the perpetrator and victim talk with one another.

“There may be restorative justice principles that may be more grounded in what the student wants to have happen rather than everything just being put into the hands of the police,” she says. 

One resource schools could implement is an anonymous tip line, so students feel safe reporting an incident involving themselves or a friend. Pfefferkorn also says it’s important to center the victim and to work to prevent these incidents before they happen.

“I think this needs to be built into conversations in the classroom about consent and trust and people’s privacy and dignity and agency,” she explains. 

And when a student is a victim of a deepfake, Laura Tierney believes it’s important to ensure they are not isolated.  

“As educators, continuing to foster a school that promotes inclusion after a deepfake incident—and making sure that you have accountability and inclusion combined—is so important,” Tierney says.

Students are often afraid to tell parents or caregivers about incidents on social media, Tierney says, including those involving deepfakes. For students affected by them, Tierney recommends what she calls the S.H.I.E.L.D. approach.

“Stop, take a moment, pause and avoid reacting impulsively and maybe engaging with the person who immediately sent that,” she says. “The second is to huddle and reach out to a trusted adult.” 

Tierney then recommends informing the platform where the deepfake was found and trying to get it taken down, collecting evidence such as screenshots, and limiting the poster by blocking them. The final step is to let others know the photo was fake.

“When … a deep fake is shared, it’s so easy for us to feel isolated and that there’s nothing we could do, when there’s actually so many positive moves a student can be making to make sure that they take care of themselves and their mental well-being,” she says. 

Tierney advocates for addressing these situations in health classes, so students can think through what they would do if they found a deepfake of themselves or a friend.

“Education, I think, beats reacting any day, all day, and so that could be education about how to spot deep fakes and what to do, education about when AI is fair game versus not fair game.”


