As GenAI hype cools down, tech competitiveness heats up

Developments in generative artificial intelligence (GenAI) can be challenging to follow, but keeping up with them is essential for business decision-makers.

Despite the sophistication of the technology, questions remain about its legality, ethics and commercial implications. “On one hand, the hype around GenAI among the general public has absolutely died down because it’s not a new concept any more,” says Edward Tian, chief executive officer of GPTZero, a provider of AI-detection software.

“On the other hand, competitiveness between tech companies has never been greater, and we are seeing a major push of AI implementation virtually everywhere, from the workplace to our personal devices. Amidst all of this, there is still a lack of legal guidelines to instruct individuals and organizations on how AI should and should not be used, leaving many people feeling very uncertain about what to do with it.”

According to Statistics Canada, one in seven Canadian businesses was using GenAI or planned to use it as of the first quarter of 2024. The most enthusiastic adoption was in the information and cultural industries – music and film production, publishing, broadcasting, data services and telecommunications – where one in four said they were already using the technology.

“The most pragmatic applications of GenAI are in content generation, customer-service automation and data analysis,” says James Allsopp, CEO of digital marketing agency iNet Ventures. “For instance, AI-powered chatbots are revolutionizing customer interaction by providing immediate, accurate responses, while AI-driven analytics tools are enabling companies to derive actionable insights from vast amounts of data.”

The applications, he stresses, “are not just theoretical, they’re delivering real value to businesses today.” As someone who has worked in this space for more than a decade, guiding clients through the adoption of AI technologies, Mr. Allsopp is both enthusiastic and cautious.

“The surge in AI-driven tools has been fuelled by a growing recognition of AI’s potential to enhance efficiency, innovation and decision making,” he says. “However, this rapid adoption has also highlighted the urgent need for governance frameworks, especially in markets like Canada where formal regulations are still on the horizon.”

Canada is expected to regulate AI at the federal level through the proposed Artificial Intelligence and Data Act, part of Bill C-27, the federal government’s broader consumer privacy legislation. Until then, it’s up to individual organizations to design their own governance regimes.

“Governance is crucial in ensuring that GenAI is implemented responsibly. A strong governance framework helps mitigate risks related to bias, privacy and security, and ensures that AI deployments align with the company’s ethical standards and strategic objectives,” Mr. Allsopp says.

“Without proper governance, the risks associated with AI could outweigh the benefits.”

Stefan van der Vlag, CEO of Clepher, has witnessed these risks firsthand with clients that use the firm’s AI-powered chatbot creation and marketing tools.

“In my experience, many organizations still struggle with establishing effective governance foundations for AI implementation,” he says, chalking it up to the complexity and novelty of the technology and a lack of clear regulations and standards.

Good governance, for Mr. van der Vlag, exists to ensure technology is used ethically and responsibly.

“It involves setting clear guidelines, processes and accountability for all aspects of AI implementation, from data collection to decision making. Effective governance can also help build trust with stakeholders and mitigate potential risks, helping organizations avoid negative outcomes such as public backlash or legal consequences.”

He says the most effective governance strategies he’s seen involve a multipronged approach. It includes diverse stakeholders, regular audits to identify biases or errors in algorithms, and prioritizing transparency around data and algorithms that are used. The work should also be grounded in “a clear code of ethics and [with] oversight mechanisms in place.”

For Dr. Kjell Carlsson, head of AI strategy at Domino Data Lab and a specialist in enterprise artificial intelligence and machine learning, 2024 was “the year when most companies realized that there is no ‘easy button’ when it comes to transforming with AI.”

According to Dr. Carlsson, a common realization has set in across companies that have adopted these tools, whether that means developers using them to write code, customer-service chatbots handling inquiries, or pharmaceutical companies using them to help develop new drugs.

“Unfortunately, nearly all have discovered that there are far fewer high-ROI, well-defined GenAI use cases than were originally expected, and that they need advanced data science and MLOps (machine-learning operations) capabilities in order to develop and deploy production-grade GenAI solutions,” Dr. Carlsson says.

It’s why, he adds, he feels we’re now in the “trough of disillusionment” in the AI hype cycle.

“Companies have realized that the bench of obvious use cases is far shorter than expected, and that the effort and capabilities required to put them into production are far greater than expected,” he says. “Gone are hopes that GenAI can rapidly replace large numbers of human tasks.”

Instead, he says, a smaller set of use cases is emerging. In customer service, for example, delivery company Bolt uses AI for almost all of its client “chat” interactions. And AI is helping humans process huge amounts of data to do everything from writing code to summarizing research reports.

“Thus,” Dr. Carlsson says, “it is no accident that the companies that have been the most successful with GenAI are the companies that were already far along on their ML journeys and have built out advanced data science and MLOps people, processes and platforms.”

Whatever you’re using it for, he adds – echoing other experts – good governance remains vital, and it needs to be practical.

“There has arguably been too much focus on AI ethics and not enough focus on governance. While ethics is important, only a small share of AI use cases have ethical considerations and most of those use cases are either already heavily regulated or will not be pursued by most enterprises, [for example] committing fraud or cybercrime, or creating deepfakes,” Dr. Carlsson says.

“In contrast, governance is important for minimizing all types of risk – not just ethical risk, but business and legal risk – for all AI use cases. When companies focus on ethics, they usually start top down and never get to the level of ongoing implementation.”

That’s why Dr. Carlsson says AI governance needs to start from “the ground up,” rooted in the actual tool and process at hand, and iterated on daily.

“It is less about ethics and more about enforcing best practice. It is also less about increasing prevention and more about accelerating adoption. Executive committees, while necessary, are less important than controlling access to data and models and applying policies. It is about the hard work, done by people, enabled with good tools, following best practices, visible to others who can validate and assist.”

He concludes: “That holds for both good AI governance and driving transformation with GenAI.”