Is the New Natural-Sounding Chatbot GPT-4o Breaking a Barrier?

Last year, Oxford AI experts warned Britain’s House of Commons that “Artificial intelligence could kill off the human race and make mankind extinct.” This year, AI expert Ben Eisenpress warned the rest of us about “5 easy ways artificial intelligence could kill off the human race and make mankind extinct.” We also heard recently that “AI experts are increasingly afraid of what they’re creating” and that the only way to deal with AI is to shut it down.

Dozens, if not hundreds, more AI experts are doubtless waiting their turn in front of the cameras and the notebooks… The advent of GPT-4o from OpenAI, which, we are told, can “reason across audio, vision, and text in real time,” is sure to bring more of them out in force.

Are they getting the problem all wrong?

But, says philosopher of technology Shannon Vallor at IAInews, the headline-grabbing doom squad has indeed got the problem all wrong. Vallor, author of The AI Mirror (Oxford 2024), sees the main problem not as what AI is but as what we believe it is. She points to the episode in 2022 when Google engineer Blake Lemoine decided that the chatbot he was working on, far less sophisticated than today’s models, was conscious and sentient:

In an interview with Wired, Lemoine claimed that “LaMDA wants to be nothing but humanity’s eternal companion and servant. It wants to help humanity. It loves us, as far as I can tell.” Lemoine’s peers in the AI research community, even those who are bullish on AGI, quickly assured the world that LaMDA was no more sentient than a toaster, and Lemoine was hustled out of Google before the media cycle had barely gotten warm. But Lemoine was a smart and sincere guy, not some rube or huckster looking to make money off a media stunt. So if he was fooled that easily and fully by LaMDA, how many people are going to be fooled by GPT-4o?

Shannon Vallor, “The dangerous illusion of AI consciousness,” IAInews, May 23, 2024

Vallor doesn’t think that the demo of GPT-4o shows “any great leap in intellectual capability over its predecessor GPT-4.” But it sounds more natural in real-time conversation, and she fears that this means more people will, like Lemoine, come to believe that it is conscious and sentient. She offers a scenario:

Imagine your socially awkward teenager, your emotionally stressed partner, or your financially vulnerable parent—or all of them!—being wholly convinced that their truest friend, their most honest confidant, and the most deserving recipient of their attention and care is the mindless chatbot avatar of a for-profit AI company. What will that cost them?

Vallor, “AI consciousness”

And if you, as a real human being, are trying to break through the potential victim’s user illusion, she asks, what will your argument be? So the real problem won’t be that AI is replacing us but that people are deluding themselves into believing that it is.

Vallor is doubtless onto something there. It’s interesting that science fiction gets so much right, yet about AI it gets the key thing so wrong.

You may also wish to read: Computer prof: You are not computable and here’s why not. In a new book, Baylor University’s Robert J. Marks punctures myths about the superhuman AI that some claim will soon replace us. Dr. Marks noted a paradox involving computers and human creativity: once any concept is reduced to a formula a computer can use, it is, by definition, no longer creative. That is a hard limit on what computers can do.

