We’re in the brute force phase of AI when GPUs are needed • The Register

AI techniques that require specialist hardware are “doomed,” according to analyst firm Gartner’s chief of research for AI Erick Brethenoux – who included GPUs in his definition of endangered kit.

Speaking to The Register at Gartner’s Symposium in Australia today, Brethenoux said that in the 45 years he’s spent observing AI, numerous hardware vendors have offered specialist kit for AI workloads. All failed once vanilla machines could do the job – as they always eventually can.

The need for specialist hardware, he observed, is a sign of the “brute force” phase of AI, in which programming techniques are yet to be refined and powerful hardware is needed. “If you cannot find the elegant way of programming … it [the AI application] dies,” he added.

He suggested generative AI will not be immune to this trend.

The good news is, he believes organizations can benefit from AI without generative AI.

“Generative AI is 90 percent of the airwaves and five percent of the use cases,” he noted – and users have already learned that lesson.

Brethenoux described the period from late 2022 to early 2024 as a “recess” in which IT shops “stopped thinking about things that make money” and explored generative AI instead. Those efforts have largely led orgs back to the AI they already use – or to “composite AI” that uses generative AI alongside established AI techniques like machine learning, knowledge graphs, or rule-based systems.

Organizations have realized that AI may already be making a big contribution to the business in many scenarios that engineers appreciate – such as machine learning informing predictive maintenance apps – but which never caught the eye of execs or the board. Recess is over.

An example of composite AI at work could be generative AI creating text to describe the output of a predictive maintenance application. The Register has often heard the same scenario applied to software that analyzes firewall logs and which now uses generative AI to make prose recommendations about necessary actions that improve security – and even writes new firewall rules to enact them.
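The composite pattern described above can be sketched in a few lines: an established technique (here, a toy logistic scoring function standing in for a trained predictive-maintenance model) does the analysis, and a generative model only phrases the structured result as prose. The `generate_text` stub, the pump name, and the scoring weights are all hypothetical placeholders, not any real API or model.

```python
def predict_failure_probability(vibration_mm_s: float, temp_c: float) -> float:
    """Toy stand-in for a trained predictive-maintenance model:
    a hand-tuned logistic-style score, clamped to [0, 1]."""
    score = 0.08 * vibration_mm_s + 0.02 * (temp_c - 60.0)
    return max(0.0, min(1.0, score))

def generate_text(prompt: str) -> str:
    """Hypothetical stub for a generative model; a real system
    would call an LLM here."""
    return f"[generated summary for: {prompt}]"

def maintenance_report(vibration_mm_s: float, temp_c: float) -> str:
    # The established AI technique produces the actual prediction...
    p = predict_failure_probability(vibration_mm_s, temp_c)
    # ...and generative AI is used only to narrate that result to a human.
    prompt = f"Explain to an operator that pump P-101 has a {p:.0%} failure risk."
    return generate_text(prompt)

print(maintenance_report(vibration_mm_s=4.0, temp_c=70.0))
```

The point of the split is that the number comes from a deterministic, testable model; the generative component cannot change the prediction, only describe it.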

Brethenoux recalled that some orgs he speaks to still think generative AI can power their next application. He often tells them the same outcome can be achieved more quickly – at lower cost – with an established AI technique.

Gartner’s Symposium featured another session with similar themes.

Titled “When not to use generative AI,” it featured vice president and distinguished analyst Bern Elliot pointing out that Gen AI has no reasoning powers and produces only “a probabilistic sequence” of content. Even so, Elliot said Gen AI hype has reached two to three times the volume Gartner has seen for any previous tech. Generative AI is, in short, being asked to solve problems it was not designed to solve.

Elliot recommended not using it to tackle tasks other than content generation, knowledge discovery, and powering conversational user interfaces.

Even in those roles, he described the tech as “unreliable like Swiss cheese: you know it has holes, you just don’t know where they are until you cut it.”

Elliot conceded that improvements to Gen AI have seen the frequency with which it “hallucinates” – producing responses with no basis in fact – fall to one or two percent. But he warned users not to see that improvement as a sign the tech is mature. “It’s great until you do a lot of prompts – millions of hallucinations in production is a problem!”

Like Brethenoux, Elliot therefore recommended composite AI as a safer approach, and adopting guardrails that use a non-generative AI technique to check generative results. ®
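One way to picture such a guardrail is a deterministic validator that accepts a generated firewall rule only if it matches a strict, hand-written grammar. The rule format below (“deny from <ipv4>/<prefix>”) is invented for illustration and does not correspond to any real firewall syntax; the sketch only shows the shape of a non-generative check on generative output.

```python
import re

# Strict grammar for the (hypothetical) rule format: "deny from a.b.c.d/nn".
RULE_PATTERN = re.compile(
    r"^deny from (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})$"
)

def guardrail_check(generated_rule: str) -> bool:
    """Deterministic, non-generative validation: the rule is applied
    only if it parses and every field is in range. No LLM involved."""
    m = RULE_PATTERN.match(generated_rule.strip())
    if not m:
        return False
    octets = [int(x) for x in m.groups()[:4]]
    prefix = int(m.group(5))
    return all(0 <= o <= 255 for o in octets) and 0 <= prefix <= 32

print(guardrail_check("deny from 203.0.113.0/24"))   # prints True
print(guardrail_check("deny from 999.0.113.0/24"))   # prints False: hallucinated octet
```

Because the check is rule-based, a hallucinated or malformed rule is rejected before it can reach production, which is exactly the composite-AI safety net Elliot recommends.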
