OPINION: Drawing a line: Thinking about using AI ethically | Opinion

It’s no exaggeration to say that artificial intelligence has taken over the college learning experience. Campuses across the world have become intense hotspots for AI use, and as ChatGPT’s marketing has shown, this is no accident.

Even though AI is still in its early stages of development, its use has become widespread. In a 2024 survey across 16 countries, 86% of students said they used AI in some way in their studies. A 2025 faculty survey likewise showed high adoption, with 61% of faculty reporting that they use AI in teaching, though 88% of those said they do so only minimally. For both groups, though, AI use is rising rapidly.

The consensus I’ve heard in the classroom is that AI is the next big stride in technology and that its integration into education is all but inevitable. While this certainly seems to be the case, it is important to remember that AI is not without limitations, both in its current form and as it becomes more developed and sophisticated. For example, biases such as racial, gender and cultural stereotyping are an inherent issue in generative AI models. These biases, along with AI’s negative environmental impacts and the income inequality and job displacement it threatens, make it clear that the tool is not without flaws.

Of course, students are aware of this. In the same 2024 survey, 60% of students reported worrying about the fairness of AI evaluation, 61% worried about their privacy and data security, and 32% worried about bias and fairness in AI responses. All the while, half of all students surveyed reported not feeling AI-ready. Set against the explosively high usage rate, this unease reveals a clear cognitive dissonance.

In light of all this, the question of AI’s role in learning has become incredibly pressing. With its integration inevitable, where should students and faculty draw the line with AI use?

Just last month, the University System of Georgia attempted to answer this question with its Student Guide to Generative AI Literacy, which instructed students on best practices regarding generative AI and how to ensure its ethical use. Crucially, the guide constantly and intentionally referred to GenAI as a tool, cautioning students to treat it as an assistant or mentor rather than allowing it to do their thinking for them.

This is the best line to draw in the context of education. As students, we have all felt the desire to just finish an assignment, or wished for an answer without having to do all the work. In the short term, using AI to accomplish this may feel like the best solution, but it is a dangerous habit to form. The potential for incorrect responses aside, an overreliance on AI could be detrimental to intellectual growth, creativity and the development of problem-solving skills.

For these reasons, it is necessary to draw an intentional line with AI. I find that understanding AI as a means to my goals, not an end in itself, is the best way to think about it. This understanding keeps AI as just a tool in my mind, able to help but not to fully relieve me of the responsibility of learning.

Like many other responsibilities, the process of learning is not always pleasant, but it is a necessary hardship. While AI certainly can help with many of the more tedious parts of learning, whether that means summarizing readings or developing talking points for class discussions, these uses come at a cost.

Ultimately, there is something unique about human learning that generative AI simply cannot recreate. Protecting that uniqueness by understanding AI as nothing more than a means to an end is a valuable line to draw, and one we should all consider as students learning in the era of AI.  

