Human-acting AI chatbots on social media stir ethical concerns

A technology researcher wrote this week that artificial intelligence chatbots are invading online communities meant for human connection.

And some of the early results have been jarring, she said.

Casey Fiesler, an information science professor at the University of Colorado Boulder, said companies should take more time weighing whether AI is actually helpful before rolling it out to a growing number of platforms.

“Right now, many companies are wielding generative AI as a hammer, and as a result, everything looks like a nail,” she wrote in an article for The Conversation.

Fiesler focused on Meta AI and chatbot integration on social media.

Meta has woven AI response systems into Facebook, Instagram, WhatsApp and Messenger.

The company touts that Meta AI is built on a powerful large language model, and people can use the AI “in feed, chats, search and more across our apps to get things done and access real-time information, without having to leave the app you’re using.”

But Fiesler cited recent examples of Facebook chatbots impersonating other human users.

In one example, also reported by The Associated Press and others, Fiesler said an AI chatbot told a mother seeking advice in a Facebook group that it also had a gifted and disabled child.

In another instance, a Facebook chatbot reportedly tried to give away nonexistent items to a user. The AI offered someone in a Facebook group a “gently used” camera and an “almost-new” air conditioner.

“Concerns that I already had around the role of AI in online communities were simply confirmed by the example in the parenting Facebook group,” she said Tuesday via email. “I hope that the negative attention encourages both Meta and others to be much more careful about how chatbots are deployed moving forward, but if they aren’t, I suspect we will see many more examples like this one.”

Andrew Selepak, a social media expert who teaches at the University of Florida, said there’s nothing inherently wrong with chatbots on social media. They could be useful in answering simple, factual user questions for any number of brands at any time of day.

What are your hours?

When are you having a sale?

But Selepak said AI impersonating a human with compassionate responses in a Facebook group setting is a different story.

“That in many ways almost seems evil, because we as humans want an empathetic response,” he said. “We want to connect to others, and it’s part of what we are as a species. And you don’t get that from AI, especially AI that’s faking its humanness.”

The chatbot in the Facebook moms’ group, for example, reportedly responded: “I have a child who is also 2e (twice exceptional) and has been part of the NYC G&T program. We’ve had a positive experience with the citywide program, specifically with the program at The Anderson School.”

Selepak said AI responses like that erode trust among users about the legitimacy of any post.

And they run counter to the original intent of social media, he said.

“If we look at what social media – you know the Jack Dorsey, Mark Zuckerberg, early days of social media – it was about human connection,” Selepak said. “And AI is not a human. AI does not provide human connection.”

Meta says its AI chatbot will respond in a group when someone tags it with @MetaAI, or when a user’s post goes unanswered for an hour.

Selepak said that’s about generating engagement, not connections.

“It’s just like, well, if we give you a notification that someone responded to your post, you’ll come back and look at advertising,” he said.

Anton Dahbura, an AI expert and the co-director of the Johns Hopkins Institute for Assured Autonomy, said there’s a “mad dash” to roll out AI-powered tools, often without proper consideration for the well-being of the users or customers.

Even a big tech company such as Meta can fall victim to the allure of AI, he said.

“It seems seductive that AI can be used as a catalyst” for engagement, Dahbura said.

But companies are skipping the steps needed to determine whether this is a solution people actually want, he said.

“We’re kind of jumping right into it feet first without really understanding all of the implications,” Dahbura said.

Facebook group administrators can turn off the chatbot feature, Meta says.


Author: Rayne Chancer