Overly strict regulations could hinder AI growth in India, caution experts

As India grapples with the regulatory landscape for artificial intelligence (AI), a sector that has seen rapid development in recent years, experts caution that strict regulations could stifle the country’s burgeoning AI-driven economy.

Currently, India does not have specific laws directly addressing generative AI, such as deepfakes. It has instead introduced a series of advisories and guidelines to encourage the responsible development and implementation of AI technologies.

After a “deepfake” video clip of actor Rashmika Mandanna went viral on social media platforms last year, the Ministry of Electronics and Information Technology (MeitY) asked social media intermediaries to take such content down within 36 hours, a requirement outlined in the IT Rules, 2021.

Plea in court

In December last year, the Delhi High Court asked the Centre to respond to a public interest litigation (PIL) plea against the unregulated use of AI and deepfakes.

Deepfake videos utilise AI to swap the likeness of a person in an existing video with someone else’s. Recently, concerns have grown around deepfake technology, as it can produce highly realistic fake videos that may be misused for spreading misinformation, creating fake news, or generating false narratives.

The petition said that while technological development was happening by leaps and bounds, the law was moving at a snail’s pace. The plea said AI has its own deep-rooted challenges and it was necessary to fill the vacuum caused by the absence of regulations.

While the high court is scheduled to hear the petition in July, the MeitY on March 1 issued an advisory saying that all generative AI products, like large language models on the lines of ChatGPT and Google’s Gemini, would have to be made available “with [the] explicit permission of the Government of India” if they are “under-testing/ unreliable”.

The advisory came soon after Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, reacted sharply to Google’s Gemini chatbot, whose response to a query, “Is [Prime Minister Narendra] Modi a fascist?” went viral.

However, after the advisory came under criticism from experts for being ambiguous and vague, on March 15, 2024, MeitY issued a fresh advisory, dropping the requirement of obtaining “explicit permission” from the government. The latest advisory said under-tested or unreliable AI products should be labelled with a disclaimer indicating that outputs generated by such products may be unreliable.

Balancing act

Jaijit Bhattacharya, president of the Centre for Digital Economy Policy Research (C-DEP), said the advisory is largely about following the extant regulations, with an additional qualifier that all AI-generated content that can potentially cause disinformation should be clearly labelled as AI generated.

“This advisory does not really hinder the industry. It stays away from dictating what algorithm to use or hound the start-ups to provide ‘explainability’ or ‘transparency’ of their algorithms,” Mr. Bhattacharya said.

Dr. Amar Patnaik, a former MP, noted that the Centre’s advisory has so far remained just that, an advisory without statutory force.

“By not giving legislative backing to this advisory as yet, we have adopted a soft touch approach which I think is required in the Indian context given the manifold use cases for India’s unique problems and aspirations as a global leader on adoption of public digital infrastructure to drive its economic growth,” Mr. Patnaik said.

He said India faces the challenge of balancing responsible AI development with fostering innovation. “Strict regulations risk stifling its growing economy led by AI industry, necessitating a nuanced approach,” Mr. Patnaik said.

Mishi Choudhary, founder of Software Freedom Law Center (SFLC), suggested the government “should be ready to update existing laws to protect public interest and [guard against] future harm associated with the technology”.

She advocated periodic assessment of the approach to AI regulation, saying regulators must keep up with the rapid advancements in technology. “It must be dynamic to ensure AI principles are embedded in the organisations deploying AI systems,” she stressed.

Comparison with other countries

On India’s approach to regulating generative AI compared with that of other major economies like the U.S. and EU, Mr. Bhattacharya said, “India is developing its own approach on the matter but as of now, it is more akin to the U.S. approach, where there are no overarching regulations on AI.”

“The EU has recently come up with the Artificial Intelligence Act, which is a comprehensive Act that I also believe puts onerous liabilities on the AI industry, which may slow down the growth of AI in Europe,” Mr. Bhattacharya said.

Mr. Patnaik said, “The country seems to be taking a middle ground between the U.S. and EU models, aiming for responsible AI development without stifling innovation”. However, India’s main challenge is crafting a clear and adaptable framework that can keep pace with the rapid evolution of generative AI, he added.

“In no case should this be left only to companies to self-regulate,” Ms. Choudhary stressed.

Future directions

Dhanendra Kumar, the first Chairperson of the Competition Commission of India (CCI), said the CCI plans to conduct a market study on AI’s impact on competition, signalling a proactive stance in understanding and addressing AI’s implications for market practices.

“It is likely to be commissioned soon. The CCI is in the process of identifying a suitable agency to undertake it and has just extended the deadline for the bidders. CCI is also working on new analytical tools to upgrade its enforcement mechanism to tackle algorithmic collusion and other market practices in digital space impacting competition,” he added.
