Ole Miss Partners with Texas A&M, Virginia Tech for Novel AI Study

OXFORD, Miss. – Colleges and universities across the globe are grappling with artificial intelligence without a consensus on how best to approach one of the fastest-growing technologies of the era.

Researchers from three leading research institutions across the South want to change that.

The University of Mississippi is leading a new study to create the first-ever baseline model of AI ethics education. Other universities involved in the research are Texas A&M University and Virginia Tech.

“The problem we’re trying to solve is that AI is very new in terms of the public knowledge awareness of it and the explosion of tools and capacities out there,” said Deborah Mower, director of The Center for Practical Ethics at Ole Miss and lead investigator on the study.

Deborah Mower

“Universities and colleges nationwide are scrambling to figure out how to teach students to be good, competent and ethical users of AI.”

The National Science Foundation awarded Mower; Glen Miller, instructional professor at Texas A&M; and Qin Zhu, associate professor of engineering education and the director of Virginia Tech’s Laboratory and Network for the Cultural Studies of Engineering and Technology, nearly $400,000 for their three-year study.

The grant is the first from the NSF’s Ethics and Responsible Research program to come to a Mississippi institution.

Mower will collaborate with the Mississippi Artificial Intelligence Network to include 15 community colleges and universities in the study. Miller will expand the research through the Texas A&M system’s 10 schools across the state. Zhu will extend collaborations between his center and other regional researchers.

Together, the researchers hope to gain a better idea of how higher education has responded to AI.

“We’re going to involve these institutions in the entirety of the research grant – from designing the surveys to disseminating information to their networks and to helping them understand what we’ve learned,” Miller said. “That’s a novel approach to trying to integrate underrepresented institutions in a meaningful way.”

Artificial intelligence poses many challenges to higher education, and every institution has responded differently, Zhu said. Even within schools, different departments may have separate approaches to AI ethics education.

“One of the things we want to do is see if there is a typology among all these universities that we can use to describe the issues we are interested in studying,” he said. “This is an opportunity for computer scientists and social scientists to collaborate.

Qin Zhu

“This interdisciplinary collaboration is more urgent than other problems we’ve experienced before.”

The uses of artificial intelligence are as varied as the people who use it. Examples range from writing essays to solving complex engineering problems to online shopping and advertising. The potential ethical dilemmas of AI are just as vast.

Some AI systems make inferences or judgments based on a person’s race or dialect. A host of concerns surround intellectual property rights and how they may apply to an age when AI can replicate your favorite artist singing a song they never sang.

AI also poses ongoing concerns about user privacy, and many have raised the issue of transparency in an AI system’s inner workings.

But these are only a handful of the issues at play, Miller said.

“I had a colleague come to me and say, ‘We actually have no idea what we should be doing right now with our students,’” he said. “Because we don’t want to put them out into the world where their skills depend on AI that may not be legal in three to five years.

“What kind of new skills do we need if we have AI? What kind of fallback systems will we need if we don’t have it?”

These are the kinds of questions that the researchers hope their results will answer, Mower said.

“That’s why we’re doing this study,” she said. “The goal is for the participants to develop recommendations and guidance, and to give (colleges and universities) access to this data.

Glen Miller

“Higher education is struggling to figure out how to respond, and what we will learn from this study can help establish baseline education standards for AI ethics as well as to build networks for shared resources.

“We don’t want colleges and universities to feel like they have to go it alone and reinvent the wheel.”

Outside of providing a national baseline for AI ethics education, the study will also generate a deeper understanding of the issues in the participating states.

“This is building expertise in the state of Mississippi,” Miller said. “When the grant is done, the expertise stays where the faculty are. From the perspective of Mississippi, you’re getting faculty expertise and faculty who will be able to guide instructors to make good decisions because the expertise is in our universities.

“Your students in the next 5, 10, 15 years will understand the ethical issues they encounter and come up with a good path forward because of the expertise that will be generated by this grant.”

This work is based on material supported by the National Science Foundation award nos. 2418866, 2418867 and 2418868.

Top: Deborah Mower (center), director of The Center for Practical Ethics at UM, leads a team that includes Qin Zhu, associate professor of engineering education at Virginia Tech, and Glen Miller, instructional professor at Texas A&M, in a new study to create a baseline model of AI ethics education. The three-year project is supported by nearly $400,000 from the National Science Foundation.