Artificial intelligence has gone from novelty to necessity in a remarkably short time. What once felt like a futuristic concept reserved for Silicon Valley labs is now embedded in everyday life: algorithms curate our news, assist in writing emails, recommend what we watch and even help us learn. At Loyola, that shift is no longer theoretical. It’s visible and increasingly unavoidable. Yet this growing acceptance of AI introduces a tension that Loyola has not fully reckoned with: the double standard between how students and professors are expected to engage with artificial intelligence.
In the coming months, Loyola will transition its campus market to a grab-and-go system powered by artificial intelligence. Students can scan their IDs, pick up items and walk out, trusting AI to process their purchases seamlessly. It’s efficient, modern and undeniably convenient. But while the university is embracing AI structurally, the academic and ethical conversation surrounding its use remains incomplete and, in some cases, contradictory.
In classrooms, especially within business-related disciplines, AI is becoming a normalized part of learning. Some students report that lecture modules, videos and supplementary materials in certain courses are largely AI-generated. These tools are often framed as references rather than replacements for instruction, and many professors encourage students to use AI responsibly on assignments. In some cases, the goal is not to avoid AI, but to learn how to train it, question it and integrate it into real-world decision-making, such as budgeting or financial planning.
On the surface, this approach feels progressive. AI literacy is undeniably a valuable skill, particularly for students entering fields where automation and data analysis are already standard. Teaching students how to use AI thoughtfully, rather than pretending it doesn’t exist, is arguably more responsible than enforcing blanket bans.
At the same time, students are often warned about over-reliance on AI. In many classes, using AI improperly can still be considered academic dishonesty, even as its use is encouraged elsewhere. Meanwhile, AI-generated content is increasingly present in the course materials themselves. When students are asked to critically engage with material produced by AI while being told to limit their own use of it, the line between ethical and unethical use blurs.
This inconsistency creates confusion. If AI is a legitimate educational tool, then the conversation should be transparent and reciprocal. Students should not be expected to navigate evolving norms alone while faculty use AI behind the scenes without clear acknowledgment or discussion. Authentic education requires honesty about how knowledge is produced, whether by humans, machines or some combination of the two.
More broadly, the rise of AI on campus raises an important question: What role should technology play in a university that prides itself on connection, critical thinking and dialogue?
AI excels at efficiency. It can generate summaries, analyze data and automate transactions with impressive speed. The grab-and-go market shows this strength. But education is not just about efficiency. It’s about mentorship, debate, interpretation and the process of learning through interaction. No algorithm can replicate the experience of a professor challenging a student’s argument in real time, or a class discussion that shifts perspectives through disagreement.
That’s where concern begins to emerge. When AI is used to supplement learning, it can be empowering. When it starts to replace engagement, whether through automated lectures or depersonalized systems, something essential is lost. Students do not come to college to interact with machines. They come to learn from people.
This does not mean Loyola should reject AI. In fact, the opposite is true. Ignoring it would be irresponsible. But embracing AI without fully addressing its implications is equally risky.
Loyola has an opportunity to lead not just in adoption, but in ethical integration. That means setting clear, consistent standards across departments. It means openly discussing when and why AI is used in coursework. It means ensuring that AI complements, rather than replaces, human instruction. And it means acknowledging that if students are expected to use AI responsibly, faculty must model that responsibility as well.
Most importantly, it means remembering that technology should serve education, not redefine it entirely.
The AI-powered market is a glimpse into the future Loyola envisions: fast, automated and forward-looking. But the classroom should not become another checkout lane. Education thrives on curiosity, conversation and human presence, elements no algorithm can replicate.
As Loyola continues to adapt to an AI-driven world, the question isn’t whether artificial intelligence belongs on campus. It already does. The real question is whether the university is willing to have an honest, campus-wide conversation about how far its role should go and what should remain unmistakably human.