Researchers call for guardrails for deadbots

An emerging digital afterlife industry is developing 'deadbots' or 'griefbots' that allow mourning users to simulate conversations with loved ones who have passed away. Researchers from the University of Cambridge have called for safety protocols for such technology to prevent social and psychological harm.

AI technology could expose users to digital 'hauntings'. (Image Credit: Ekaterina Shakharova/Unsplash).

New Delhi: An emerging digital afterlife industry is providing users with AI griefbots, or deadbots, that hold text or voice conversations with lost loved ones, creating a postmortem presence. Researchers from the University of Cambridge have identified the need to design safety protocols for such technologies to mitigate the risk of psychological harm and of digital 'hauntings' by the dead. The researchers have also identified several ways in which the technology could be abused by malicious actors.

AI ethicists from Cambridge have sketched three design scenarios for such platforms, illustrating the potential consequences of careless design in a field of AI they consider high-risk. The resulting chatbots could be used by companies to spam surviving friends and family with unsolicited notifications, reminders and updates about the services they provide, in effect leaving mourners stalked by the dead. The financial motives of digital afterlife services may also prevent them from prioritising the dignity of the deceased.

Haunted by deadbots

The researchers argue that those who are initially comforted by a griefbot may be drained by daily interactions that become an overwhelming emotional weight. If a deceased loved one signed a contract with a digital afterlife service, surviving relatives may be powerless to suspend the AI simulation. The technology may also burden those who are not prepared to process grief through interactions with a griefbot.

Another potential scenario is a conversational AI service that lets people create a griefbot without the consent of the 'data donor'. A company could also use the provided data for purposes other than those it was supplied for, exposing users to manipulation. For example, after an initial premium trial, the chatbot could suggest ordering from a food delivery service in the voice and style of the deceased.

A third scenario involves an adult child who becomes emotionally exhausted and wracked with guilt over ignoring messages from a deadbot, yet cannot suspend the account without violating the terms of the contract their deceased parent signed with the company offering the service.

Recommended guardrails

The researchers have called for age restrictions on griefbots, along with meaningful transparency measures that keep users consistently aware they are interacting with an AI, similar to the seizure warnings shown before video games. Digital afterlife services must consider the rights and consent not only of those they recreate but also of the users who interact with the simulations, and must offer opt-out protocols that allow users to terminate their relationships with deadbots.

The paper describing the research has been published in Philosophy & Technology. Study coauthor Katarzyna Nowaczyk-Basińska says, “Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one. This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”
