India sets 3-hour deadline for social media platforms to take down AI-generated, deepfake content

India has directed social media platforms such as Facebook, Instagram, and YouTube to clearly label all AI-generated content and ensure that such synthetic material carries embedded identifiers, according to an official order.

In a stricter enforcement measure, the government has set a three-hour deadline for social media companies to take down AI-generated or deepfake content once it is flagged by the government or ordered by a court.

Platforms have also been barred from allowing the removal or suppression of AI labels or associated metadata once they have been applied, the order said.

To curb misuse, companies will be required to deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative or deceptive AI-generated content.

Additionally, platforms need to regularly warn users about the consequences of violating rules related to AI misuse. Such warnings must be issued at least once every three months, the government said.

The latest directions come amid growing concern over the spread of AI-based deepfakes online and build on draft amendments proposed last month by the Ministry of Electronics and Information Technology (MeitY) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The draft rules seek to mandate disclosure by users when posting AI-generated or modified content and require platforms to adopt technology to verify such declarations. Social media platforms have already rolled out features allowing users to label certain content as generated or modified using artificial intelligence.

The initial focus of the government's enforcement push is on leading social media intermediaries with five million or more registered users in India, Ministry officials told ET in November 2025.

YouTube currently requires creators to disclose “meaningfully altered” or synthetically generated content in specific cases, including videos that make a real person appear to say or do something they did not, alter footage of real events or places, or create realistic-looking scenes that never occurred.

Meta has also directed Facebook and Instagram users to label content featuring digitally generated or altered photorealistic audio and visuals, citing examples such as AI-generated conversations, songs created using AI, and reels narrated with AI voiceovers.
