Twenty-five years ago, fear of the Y2K bug gripped the world, and India was no exception. The worry was that computers would crash when the calendar rolled over from 1999 to 2000 because of the way years were stored: many systems recorded only the last two digits of the year, so a year written as 00 could be read as 1900 instead of 2000. It proved to be a false alarm, but 2000 became a turning point for India’s IT industry.
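To see why two-digit years caused trouble, here is a minimal illustrative sketch in Python; the function names and the parsing rule are hypothetical and not drawn from any particular legacy system:

```python
# Hypothetical sketch of the Y2K problem: a two-digit year field
# interpreted with an implicit "19" prefix, as many legacy systems did.

def parse_two_digit_year(yy: str) -> int:
    """Interpret a two-digit year by prefixing '19'."""
    return 1900 + int(yy)

def years_elapsed(start_yy: str, end_yy: str) -> int:
    """Difference between two two-digit years; breaks across the century boundary."""
    return parse_two_digit_year(end_yy) - parse_two_digit_year(start_yy)

print(parse_two_digit_year("00"))   # 1900, not 2000
print(years_elapsed("75", "99"))    # 24 -- looks fine in 1999
print(years_elapsed("75", "00"))    # -75 -- the kind of error Y2K remediation targeted
```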
Come 2024, and people were gripped by a similar fear, this time of artificial intelligence (AI). True, the technology has the potential to reshape virtually every aspect of human life. From healthcare and education to manufacturing, logistics, and even entertainment, this emerging technology is revolutionising industries, improving efficiencies, and opening up new possibilities. Then why the fear? Because it’s the first technology in human history that has the potential to slip out of human control. The debate that dominated conversations during 2024, and will continue to do so in subsequent years, is: Is AI a blessing for mankind, or will it turn out to be a Frankenstein’s monster? This was best summarised by Prime Minister Narendra Modi when he pointed out that global security would face a big threat if AI-laced weapons were to reach terrorist organisations.
Electronics and IT Minister Ashwini Vaishnaw has flagged four major issues as AI gains speed and prominence: fair compensation for content creators, algorithmic bias on digital platforms, the impact on intellectual property and, arising from these three, whether the safe harbour provisions that grant legal immunity to social media platforms should be revisited.
These apprehensions are not without basis. Google’s generative AI platform Gemini became a cause of embarrassment for the tech major by throwing up biased responses to questions relating to history, politics, gender and race. The Indian government saw red over a response which suggested that the Prime Minister is a fascist, while the same question about Ukrainian President Volodymyr Zelensky and Donald Trump elicited quite diplomatic answers. Similarly, the deepfake video of actor Rashmika Mandanna, followed by those of several celebrities across the spectrum, was a preview of what this technology could do even to ordinary citizens in the course of time.
Moving away from fear and apprehension, the year also saw a debate on the positive features of the technology and the manner in which its benefits should be harnessed. Large language models (LLMs), the foundational models on which AI applications capable of performing a wide range of tasks are built, led to a sharp division of views. Infosys co-founders NR Narayana Murthy and Nandan Nilekani came out against India getting into the race to build LLMs, arguing instead for a focus on use cases, which can be served by small language models (SLMs). The duo offered a strong rationale: LLMs need to be trained on large data sets, which takes several years, and Big Tech firms have already gained a huge lead. There’s no point reinventing the wheel and entering this race with limited resources; it’s better for Indian firms to focus on SLMs, which are use-case specific, went the reasoning.
This was countered by Google Research India head Manish Gupta, who felt that foundations have to be built before use cases can be built on them. Gupta cited the example of Aadhaar, which Nilekani built, where the foundation came first and the use cases later. There are a few instances of Indian firms building and unveiling LLMs, but the majority are focused on use cases. Though the jury is still out on the preferred course, indications are that it will be a mix of both, with more success likely in SLMs than in LLMs.
In a country like India, the impact of such technologies on jobs is another critical area that continues to be debated with passion. While Luddites take the extreme view that large-scale job losses are imminent, progressives are more sceptical of such doomsday scenarios. Rishad Premji, executive chairman of Wipro, summed it up well when he pointed out that AI’s transformative potential is expected to disrupt the labour market, with tasks, rather than job roles, becoming the focal point.
The general consensus is that routine, monotonous jobs will face the axe while skilled ones will remain in demand. As a result, the prescription is constant upskilling to stay relevant in the age of AI. But is it really so? Historian Yuval Noah Harari threw a spanner in the works of such prescriptions when, in his book Nexus, he illustrated how some routine functions are harder to automate than skilled ones. For example, it’s easier to automate chess playing than, say, dishwashing, he pointed out. Similarly, society may today put a premium on the work of doctors and discount that of nurses, but the fact is that AI has the potential to automate the job of the former, not the latter.
With such high stakes in every sphere of life, it’s natural that the year saw all sections of society engrossed in how AI would unfold. Deepfakes, cybersecurity threats, data theft and the like fall on the negative side, while harnessing the technology for education, healthcare and agriculture can bring immense benefits. The government surely has a role to play in framing a regulatory policy. However, 2024 saw governments across continents grapple with how to build a regulatory framework to govern the technology’s usage. While each country needs to do its bit, AI policy and regulation will also need a global consensus, as the technology is not restricted by geographical boundaries. This is the debate that will continue in 2025.