MeitY Invites Proposal To Build Deepfake Detection Tool

SUMMARY

MeitY has invited proposals for five projects involving the development of AI tools, guidelines, frameworks and standards to create a trusted AI ecosystem

MeitY said that the deepfake detection tool should employ deep learning algorithms and incorporate provenance-based techniques to analyse the authenticity of media

The Ministry also seeks to develop AI risk assessment and management frameworks to identify, assess, and mitigate cross-sectoral safety risks

The Ministry of Electronics and Information Technology (MeitY) has sought proposals from startups and other relevant stakeholders for building AI-powered tools to detect deepfakes.

As per the expression of interest (EoI) floated by MeitY, the Ministry has invited proposals for five projects involving the development of indigenous technical AI tools, guidelines, frameworks, and standards to create a trusted AI ecosystem.

“With the increase in realism of deepfakes due to advancements in AI, it is imperative to develop deepfake detection methods that safeguard against potential misinformation and manipulation in the society,” read the document. 

MeitY said that the proposed deepfake detection tool should employ sophisticated deep learning algorithms, and incorporate provenance-based techniques and other detection techniques, to analyse the authenticity of media and flag manipulation.

It also said that the submitted deepfake detection tools should be built so that they can be easily integrated into web browsers and social media platforms to enable automated cross-modal content verification, provide real-time detection, and enhance the security of the digital ecosystem.
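The EoI does not prescribe an implementation for provenance-based verification. As a rough illustration of the idea, the minimal sketch below tags media with a keyed hash at the source and checks that tag at verification time; the key name and helper functions are assumptions, and a real system would use proper cryptographic signatures and signed manifests (in the style of C2PA) rather than a shared HMAC key.

```python
import hashlib
import hmac

# Assumed shared key: a stand-in for a publisher's real signing key.
SECRET_KEY = b"publisher-signing-key"


def sign_media(media_bytes: bytes) -> str:
    """Publisher side: produce a provenance tag for the original media."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()


def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Verifier side: flag media whose content no longer matches its tag."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, tag)


original = b"...original media bytes..."
tag = sign_media(original)

print(verify_media(original, tag))              # untouched media verifies
print(verify_media(original + b"edit", tag))    # any manipulation fails
```

A browser or platform integration, as the EoI envisions, would run the verification step automatically on ingest and flag any content whose tag does not match.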

Another project that the ministry wants to undertake under the IndiaAI Mission is the development of a tool to differentiate AI-generated content from non-AI generated content. The EoI underlined that such a platform should embed unique and imperceptible markers in AI-generated content to ensure traceability and security, and to prevent the generation of harmful and illegal content.
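To make the "imperceptible marker" idea concrete, here is a toy sketch that hides a short identifying byte string in the least significant bits of pixel values, changing each affected value by at most 1. The marker string and function names are invented for illustration; production watermarking of AI-generated content uses far more robust, model-level schemes.

```python
# Assumed marker identifying AI-generated content (illustrative only).
MARKER = b"AI"


def embed_marker(pixels: bytearray, marker: bytes = MARKER) -> bytearray:
    """Hide the marker's bits, MSB-first, in the pixels' least significant bits."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return out


def extract_marker(pixels: bytearray, length: int = len(MARKER)) -> bytes:
    """Read the hidden marker back out of the least significant bits."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)


pixels = bytearray(range(64))      # stand-in for raw image data
marked = embed_marker(pixels)
print(extract_marker(marked))      # b'AI'
```

Because each pixel value shifts by at most one intensity level, the marker is invisible to viewers but machine-readable, which is the traceability property the EoI asks for; real schemes must additionally survive compression and editing, which this toy does not.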

The third project involves the development of a platform that can evaluate the ability of AI systems to withstand high-stress scenarios such as natural disasters, cyberattacks, and data and operational failures.

“Stress testing can provide actionable insights into system weaknesses and ensure preparedness for high-stakes situations. This theme focuses on the development of tools and methodologies such as simulation-based testing or stress evaluation metrics specifically designed to fortify the resilience of AI systems under stress,” read the EoI. 

In addition, MeitY has also called on stakeholders to submit proposals for “ethical AI frameworks” that offer a structured approach to ensure that “AI systems respect fundamental human values, uphold fairness, transparency, and accountability, and avoid perpetuating biases or discrimination”.

“To enable largescale adoption of AI there is a need to develop comprehensive AI Risk Assessment and Management tools and frameworks designed to identify, assess, and mitigate cross-sectoral safety risks, ensuring failures in one area are contained and managed without affecting interconnected sectors,” added the document. 

The EoI further says that the projects are open for applications from startups, India-based academic institutions, and R&D organisations, as well as private enterprises.

This comes just days after IT secretary S Krishnan reportedly said that the Ministry was building a mechanism to evaluate the safety and trustworthiness of AI solutions. At the time, he also said that the Centre is looking to prioritise innovation while steering clear of stifling regulations.

Notably, Union IT Minister Jyotiraditya Scindia echoed the same sentiment earlier this year, saying that the deployment of AI ought to be guided by ethical considerations and a robust regulatory framework.

At the heart of all this is the burgeoning Indian generative AI (GenAI) ecosystem, which is home to 200 startups that have raised more than $1.2 Bn in funding between 2020 and the third quarter (Q3) of 2024. 
