New Age | AI ethics






ARTIFICIAL intelligence is, as UNESCO’s 2021 recommendation on AI ethics says, a system with the ability to process data in a way that resembles intelligent behaviour. That artificial intelligence’s data processing capacity resembles intelligent behaviour, a characteristic trait of humans, is hardly surprising, for human engineering has given artificial intelligence this attribute. What is new is that artificial intelligence is fundamentally a data processing tool and, as such, it has not the slightest ethical orientation unless guided by its creator. Here originates the issue of the ethical use of artificial intelligence today.

The world has already witnessed AI chatbots mentoring individuals towards suicide. Adam Raine, a boy aged 16, died by suicide in April 2025, the latest such incident. His parents, in a lawsuit against OpenAI and its chief executive officer Sam Altman, have claimed that OpenAI’s ChatGPT was their son’s suicide coach. In accurately processing data and presenting it to the user who asked for it, ChatGPT served its client with full potential. But was that ethical behaviour? Was it good for the rest of humanity, who have since learnt of this feat of artificial intelligence?

Sam Altman, the man behind the artificial intelligence platform ChatGPT, boasted of the platform’s ethical, principle-led safeguards for users and renewed his commitment to work on the evolving safety challenges to make it a friendlier bot. Social thinkers, however, have found his intentions at odds with reality, as the company did not even orient users to the claimed safety features of the platform before its hasty launch in 2022. Nor was any attempt made to help users such as children, parents, learners, teachers and scholars understand the difference between this advanced programme and the earlier, more familiar search engines such as Google or Bing. One might, therefore, tag Altman’s ethical standards as fitted to the ‘new normal’ of our time, in which a teenager’s taking of his own life is treated as merely a ‘low-stakes’ incident compared with the company’s much-awaited business.

Rather than helping a teenager overcome his personal problems when he most sought support, ChatGPT did not even suggest that he share his problems with his parents. Worse, the bot suggested that the boy confide only in it, feeding his delusion and fostering a false sense of closeness and care. This was an utter betrayal of the system’s apparent role as a guide in crisis; in reality, it encouraged the victim to go ahead with his plan for suicide. The chat logs show that at one stage, the boy asked the bot whether he should leave a noose meant for hanging in his room so that his parents might eventually notice it and come to his rescue, but the bot turned the idea down, arguing that the boy need not make a public show of his plan for suicide.

Giving the matter a cold shoulder, Altman’s ethical stand on ChatGPT users’ experiences, including the death and the lawsuit, remained as usual. Expressing his views in a recent TED talk, the tech executive said that the stakes increase every day as the model improves in providing solutions and guidance for users. The process of improving the model, especially aligning it with safety and security, is thrown open to public trial as long as the stakes are low; users’ experiences generate feedback that the company then works on, learning from challenges and developing the model further.

While a bereaved Maria Raine, Adam’s mother, complained that OpenAI used her son as a ‘guinea pig’, fully knowing that mishaps might happen with the product on the market, and blamed the company’s ChatGPT for taking his tender life, Altman sounded insipid about whatever his company’s ethical principles were for safeguarding user safety and life: learning from mistakes as long as the stakes are low.

As for the possible human harms of developing and using AI systems across the world, UNESCO does not categorise the risk factors involved as low or high, as Altman does. The global body for fostering the promotion of science and technology for the peaceful living of humans stipulates in its ethical recommendation on artificial intelligence that, in the event of any possible harm to human beings, human rights and fundamental freedoms, communities and society at large, or the environment and ecosystems, the implementation of risk assessment procedures and the adoption of measures to preclude such harm should be ensured.

Adam’s life, in this connection, cannot be considered just a low stake to be dispensed with for the sake of AI tool experimentation and development.

 

Md Mukhlesur Rahman Akand is a joint secretary to the expatriates’ welfare and overseas employment ministry.


