Despite only becoming mainstream in the past few years, generative artificial intelligence models have come a long way from the early days of blurred edges and extra digits. With the rapid improvement in image quality, it is becoming harder to differentiate between human-made and AI-generated content.
The Shorthorn Editorial Board believes that students need to educate themselves on AI literacy and identification to prevent falling victim to misinformation and potential internet scams.
Misinformation, intentional or not, is inevitable. As technology advances, it gets easier to create images and videos that are difficult to distinguish from reality, which may accelerate the spread of misinformation. With generative AI, almost anyone can create quick, hyperrealistic visuals for little to no cost.
The recent release of Nano Banana Pro, Google Gemini’s image generation software, caused a stir online due to its heightened image quality and startling realism. The update has ironed out many of the telltale signs of AI image generation, such as illegible text, awkward anatomy and blurred outlines.
Another platform that has raised the alarm for many is xAI’s Grok, a generative AI tool that has been criticized for producing sexualized images of users without their consent. Many have demanded the platform be suspended, and the incident highlights some of the hesitations individuals hold around the widespread, unregulated use of generative AI.
The hesitation surrounding AI is similar on both sides of the political aisle, with approximately 50% of both Democrats and Republicans saying AI makes them feel more concerned than excited, according to a 2025 survey by the Pew Research Center. Despite this, there are still very few restrictions and safeguards in place on AI.
The issues arising with Grok highlight the danger of deepfakes: fabricated images and videos that portray an individual doing or saying something that was not actually done or said. While this technology may be used for fun, trivial purposes, such as generating a selfie with one’s favorite movie character, it can also be used to create videos of politicians making inflammatory statements to push a political agenda.
If something appears online that is questionable or triggers a strong emotional response, users should double-check the source. Before sharing or reposting, take a minute to examine the original poster, cross-reference any important facts being shared and use critical thinking.
There have also been recorded instances of generative AI being used in phone scams. Individuals have received calls made with AI-cloned voices of their loved ones, claiming to be in danger or in need of large amounts of cash.
Older generations, or those who may be less familiar with the capabilities of AI, are especially susceptible. In 2023, citizens over the age of 60 were scammed out of a collective $3.4 billion, according to an FBI elder fraud report. So, in addition to keeping up to date on AI safety themselves, students should set in place measures to help their loved ones.
One preventative measure to reduce the likelihood of falling victim to these scams is to come up with a safe word or phrase with close friends and family. That way, there is an added level of security to confirm identity.
Choose a phrase that is four words or more, which is harder to guess. In addition, when generating a safety phrase, avoid including personal information such as street names, phone numbers, passwords, information available on the internet or other details that could be used by a scammer, according to an article from CBS News.
Artificial intelligence is here to stay, and it is important that students keep themselves informed on the potential dangers it poses and how to keep themselves and their loved ones protected, especially when it comes to online safety and influencing personal opinion.
The Shorthorn Editorial Board is made up of opinion editor Lillian Durand; editor-in-chief Pedro Malkomes; news editor James Ward; associate news editor Taylor Sansom; multimedia editor Samarie Goffney; engagement editor Sairam Marupudi; design editor Haley Walton; news reporter Acadia Clements; and engagement producers Jessica Arnold and Natalie Gomez.