This teen is helping kids spot deepfakes in the name of AI literacy

MediaSmarts designated March 27 as AI Literacy Day 

As AI-generated content floods the internet, it’s never been harder to tell what’s fake.

That’s a big part of why today, MediaSmarts is launching its first-ever AI Literacy Day across Canada. 

The organization said it’s meant to address an urgent need for kids to learn the skills to keep them safe in an increasingly AI-driven world. 

“Canadians need the critical thinking skills to question what they see online, recognize deepfakes and understand why algorithms are serving them certain content,” said Kathryn Hill, executive director of MediaSmarts, in a news release. 

The day is meant to encourage schools and communities to get familiar with AI and how it can be used in dangerous ways.

One teen who works with MediaSmarts to help debunk disinformation shared why it’s so important for kids to be AI literate and how they can develop those skills right away. 

Be your own online detective, says teen 

Grade 11 student Lukas Rubenyan from Toronto, Ontario, joined the MediaSmarts Teen Fact-Checking Network a few years ago.

The program trains teens to verify the accuracy and authenticity of viral content they see online. 

From there, fact-checkers like Lukas create videos to share their findings and teach other teens how to do the same. 

“It feels like I’m a detective of sorts, solving online mysteries,” Lukas told CBC Kids News. 

Lukas has created a series of videos for MediaSmarts’ Instagram account that explain how to spot fake information online. (Image credit: MediaSmarts/Instagram)

Lukas said that with people now using deepfakes to have politicians and other figures say whatever they want, he felt called upon to help other people his age with AI literacy. 

“To develop your own opinions, you need to be able to tell what’s real from what’s fake, otherwise you’re basing them on what someone else wants you to think,” he said. 

Scott DeJong, an expert in AI literacy from Montreal, Quebec, agreed that it’s never been more important for kids and teens to be AI literate. 

As deepfakes become more seamless, he said there’s a growing concern around AI-generated content being used to manipulate us. 

“The concern is that this content can change how we act and behave,” he said. 

On a small scale, fake content can mislead you into buying a product that you wouldn’t have otherwise, for example.

But on a large scale, deepfakes could be used to change the course of history.

“We see disinformation campaigns that actually focus on reducing people’s desire to go vote because they know certain populations not voting can help another party win,” he said. 

Tips for verifying AI-generated content 

Lukas and DeJong said there are a few skills kids and teens can start developing right away to improve their AI literacy.

The most important thing? Stop and think before you share something.

“If you feel like something is trying to get money out of you or make you feel angry, scared, or trying to get you to do something, take a second, breathe and then investigate further,” said Lukas.

For images you suspect could be fake, Lukas and DeJong recommend that you: 

  • Investigate the account posting it. Do they seem credible? Do they have other AI-generated content? Does their content seem to be trying to make you emotional? 
  • Do a reverse image search. Does the photo exist elsewhere online? If not, that’s a red flag. If it does, what are other sources saying about it, and do they seem credible? 
  • Check to see if there’s a photographer credit. If not, that could be a sign it’s AI-generated.

These AI-generated images circulated online last summer and claimed to be shots of B.C. wildfires. At the time, the B.C. Wildfire Service called them out as AI-generated, saying they did not accurately reflect the terrain, fire size or behaviour in the area. (Image credit: B.C. Wildfire Service/Facebook)

For videos you suspect could be AI-generated:

  • Pay attention to small details. Is people’s skin texture strangely smooth? Do their hair lengths change? Do background details change throughout the video? 
  • Again, who is posting the video? Often, a quick Google search can let you know if a source is credible or if they have a history of disinformation. 

Finally, with certain types of deepfakes, DeJong said checking other sources is especially important. 

“Let’s say you saw a politician making a statement in a video. You can go to the actual speech it comes from and see if that was actually said.” 
