OpenAI’s ChatGPT adult mode sparks internal debate over safety, ethics and AI relationships – Firstpost

OpenAI’s proposed “adult mode” for ChatGPT has sparked intense debate among advisors and staff, raising concerns about emotional dependence, user safety and the risks of explicit AI conversations.

Artificial intelligence companies are racing to make chatbots more humanlike, but one of the latest ideas from OpenAI has triggered a serious internal debate about how far that realism should go.

The company has been exploring a feature often described as an “adult mode” for ChatGPT, which would allow the chatbot to engage in sexually explicit text conversations with adult users. While the concept is still under development, it has already sparked concern among safety experts and advisors connected to the company, according to a Wall Street Journal report.

The most obvious risk is that the feature could create new areas of conflict, especially around emotional dependence, user protection and the exposure of minors to explicit material.

The disagreement highlights a growing tension inside the artificial intelligence industry: as chatbots become more conversational and emotionally engaging, companies must decide where to draw the boundaries.

The internal debate over AI intimacy

OpenAI CEO Sam Altman has previously suggested that technology companies should avoid acting as moral gatekeepers for adult users.

The argument is straightforward. If two consenting adults are free to discuss mature topics online, then a chatbot should not necessarily be restricted from engaging in similar conversations.

But advisors involved in internal discussions have reportedly raised concerns about the psychological effects such interactions might create.

Members of OpenAI’s advisory council on well-being have warned that many users already treat AI chatbots as companions or confidants. Millions of people now use systems like ChatGPT for everything from casual conversation to emotional support.

Introducing sexual or romantic dialogue into those interactions could intensify the sense of attachment some users feel towards the technology.

Advisors worry that the always-available nature of chatbots could encourage vulnerable individuals to form deep emotional connections with artificial systems instead of seeking real-world relationships.

This concern is not purely theoretical. Previous incidents involving chatbots from other companies have shown that users can develop powerful emotional bonds with AI personalities. In some extreme cases, lawsuits have alleged that those relationships contributed to serious emotional distress.

For OpenAI, the challenge is balancing user freedom with the potential psychological impact of increasingly intimate interactions with machines.

Another major obstacle facing the proposed adult mode is ensuring that underage users cannot access explicit conversations.

OpenAI has been testing systems designed to estimate whether a user is likely to be an adult. These tools analyse behavioural signals and other indicators to predict a person’s age.

However, people familiar with the testing process say the technology is far from perfect. In some cases, the system has mistakenly classified minors as adults.

Even a relatively small margin of error could become significant on a platform used by millions of people, including teenagers. Advisors have warned that if the safeguards fail, large numbers of younger users could potentially gain access to explicit AI conversations.

Grok AI backlash

In a development that should have been a turning point for the ChatGPT adult mode debate, Elon Musk’s AI chatbot came under scrutiny last week. Musk’s artificial intelligence company, xAI, made headlines after its Grok chatbot was accused of generating manipulated sexually explicit images of women, including minors.

The controversy began when users started sharing highly realistic edited images created using Grok on X, the social media platform formerly known as Twitter. Several of these visuals portrayed women in revealing outfits, humiliating scenarios or with fabricated injuries, sparking anger among online communities and advocacy groups.

In some instances, the manipulated images reportedly involved underage individuals, intensifying concerns about the potential misuse of generative AI tools. The incident has prompted reactions from regulators and digital rights groups across multiple regions, from Europe to Asia.

In response to the backlash, xAI said it had introduced new restrictions on Grok’s image editing capabilities. The company claims it has deployed additional technological safeguards designed to prevent users from modifying photos of real people to produce explicit or revealing content.

Despite these measures, concerns remain. Some countries have moved to block or restrict access to Grok, while regulators continue to examine whether existing safeguards are sufficient.

Musk has acknowledged that the company has strengthened Grok’s guardrails, but tests by users suggest that the system may still be capable of producing similar manipulated images.

The backlash against Grok should have been an eye-opener for OpenAI as well. Yet inside the company, the question clearly remains open to debate.
