Are we in a temporary lull before generative AI delivers on its promise? Or will AI chatbots ultimately end up being useful but not revolutionary tools, more like spellcheckers?
How AI falls short of high expectations
ChatGPT gained 100 million users between its November 2022 launch and January 2023. Since then, however, growth has slowed: OpenAI said ChatGPT had 200 million weekly active users in August 2024. Meanwhile, Facebook, Instagram, and TikTok count their users in the billions.
Nor are the signs particularly promising for the technology in business settings, inside the very companies where generative AI is supposedly going to supercharge productivity and threaten human jobs.
Afraid of missing out on AI-fueled opportunities, CEOs in a wide range of industries have been spending heavily on generative AI hardware, software, and services, an estimated $150 billion this year, according to Sequoia Capital. But as I wrote in my recent book “Brain Rush,” companies are also terrified of being sued if the technology hallucinates. That fear is making them hesitant to deploy their AI investments.
Of the 200 to 300 generative AI experiments a typical large company is running, usually only about 10 to 15 lead to widespread internal rollouts, and perhaps one or two lead to something released to customers, according to my June interview with Liran Hason, CEO of Aporia, a startup that sells companies a system for detecting AI hallucinations.
Fear of AI going wrong was palpable when I participated in a meeting of retail executives in August, and it is not hard to see why. In 2023, an Air Canada chatbot incorrectly explained the airline’s bereavement policy to a customer, suggesting he could claim a partial refund on his flight after the fact. Air Canada tried to back out of the deal its AI had cut with the customer, but this February a Canadian tribunal forced the airline to pay the partial refund, Wired reported. Meanwhile, Google’s AI Overviews at one point advised people to add glue to their pizza and to eat a rock every day for their health.
In August, I asked ChatGPT to read through “Brain Rush” and return a story that potential readers would find compelling. Sadly, it replied with a wonderful-sounding story that it had completely made up. When I told ChatGPT to try again and find a story that was actually in the book, it confidently presented me with another bogus tale.
AI adoption is also being slowed by employees’ wariness of the technology. Even if they do not outright fear that AI will replace them someday, many workers justifiably fear that they will be expected to use AI to do more with the same or even fewer resources. “We’re not getting additional resources to evaluate AI for its potential benefit,” Bob Huber, chief security officer at cybersecurity provider Tenable, told CNBC. “The resources have to come from elsewhere, whether that’s via reprioritization of people’s time or placing other projects on the back burner.”
Meanwhile, Microsoft is having trouble persuading customers to pay extra for Copilot, a generative AI-powered assistant for Word, Excel, and PowerPoint, because of performance and cost issues, according to The Information. My own experience with Copilot was less than thrilling — I gave the AI assistant a D- for its weak ability to help me write an article.
To be fair, proponents of AI chatbots say we should just be patient because the technology will keep improving. GPT-5, expected to come out late this year or early next year, will “process and generate images, audio, and potentially even video” in addition to handling text, as PC Guide put it.
I am skeptical, because these new features would do nothing to alleviate hallucinations. Even if future generations of AI chatbots are trained on more data and somehow develop a richer representation of the world, they will still have the same underlying problem: a lack of integrity. They fake responses because generative AI works by guessing a plausible next word in a sentence, with no step that checks whether the result is true. Sometimes it will guess right, and sometimes it will guess wrong.
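To make that mechanism concrete, here is a deliberately simplified sketch of next-word prediction. The vocabulary, probabilities, and prompt are invented for illustration; real chatbots work over tokens, with distributions learned from vast amounts of training data, but the basic step is the same: pick a likely next word, with nothing in that step verifying the facts.

```python
import random

def next_word(vocab_probs: dict[str, float]) -> str:
    """Sample the next word from the model's probability distribution."""
    words = list(vocab_probs)
    weights = list(vocab_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical probabilities a model might assign after the prompt
# "The transistor was invented in".
probs = {"1947": 0.55, "1948": 0.30, "1950": 0.15}
print("The transistor was invented in", next_word(probs))
# Most of the time the sampler picks the plausible (and correct) "1947",
# but it can just as mechanically emit "1948": a fluent, confident error.
```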
The need for a killer app
For generative AI to meet the high expectations for it, business leaders must discover and deploy a killer app — something that gives many people an overwhelming reason to use the new technology. The killer app for the personal computer was the electronic spreadsheet. The iPod’s was the iTunes Store.
Most people using generative AI are doing it to help them overcome, say, writer’s block as they compose an email. A small number of companies are using AI to boost the productivity of business processes such as sales, customer service, and coding. That use is especially striking in the video game industry, where growth has dried up since 2020. Many game companies are losing money, finding it hard to raise capital, and laying off people. But because AI can produce images and write code, it lets studios develop new games with far fewer team members. To lower the cost of building games that might not succeed in the market, one video game developer is cutting the size of the average development team by 80 percent, to about 20 to 25 people.
But such cost cutting will never add $7 trillion to global GDP. That kind of transformation will happen only if companies use generative AI to create new sources of growth.
Until those arise, you should be skeptical about claims that this technology is about to change the world.
Peter Cohan is the founder of the Peter Cohan & Associates strategy consulting firm and an associate professor of management practice at Babson College. His most recent book is “Brain Rush: How to Invest and Compete in the Real World of Generative AI.”