India proposes tough IT rules to label deepfakes amid rising AI misuse

India has proposed new **IT rules** to tackle the growing misuse of **deepfakes** and other synthetically generated content. The Ministry of Electronics and Information Technology (MeitY) has unveiled draft amendments to the IT Rules, 2021, which would require all online platforms to label **AI-generated content**, including deepfakes. The move seeks to strengthen public trust and support India's vision of an open, safe, trusted, and accountable internet while balancing free expression and innovation.

### Background and Motivation

The surge in **AI-generated content** has significantly reshaped social media, with tools such as the **Banana Tool** and other platforms producing vast amounts of synthetic material. This rise has raised concerns over **data security** and the misuse of fabricated content, which can mislead users, cause harm, violate privacy, or threaten national integrity. Public figures have sought legal protection against the unauthorised commercialisation of their images and voices in AI-generated deepfakes.

### Proposed Amendments

The draft IT rules are designed to establish clear accountability for Significant Social Media Intermediaries (SSMIs) that facilitate or host synthetically generated information. Key aspects of the proposed amendments include:

- **Labelling AI-generated content**: Platforms will be required to clearly label all AI-generated content, including deepfakes, so that users can distinguish authentic information from synthetic material.
- **Accountability and grievance redressal**: Online platforms must remain accountable for the content they host and maintain effective grievance redressal mechanisms to address user complaints.

### Industry Responses and Initiatives

YouTube CEO Neal Mohan has announced a **Likeness Detection** tool for Partner Program creators that automatically detects AI-generated matches of a creator's facial likeness, allowing creators to review and request removal of such content. The tool is part of broader industry efforts to combat deepfake misuse.

### Legal and Regulatory Framework

Earlier this year, social media platforms and other stakeholders urged the government to mandate AI-labelling standards and grievance redressal mechanisms to address the misuse of deepfakes. The Deepfake Prevention and Criminalisation Bill, 2023, proposed criminal penalties for creating and distributing deepfake media without consent or proper watermarking.

As India continues to navigate the challenges posed by AI-generated content, the proposed rules form part of an ongoing effort to safeguard digital spaces and ensure that technology is used responsibly. They also align with broader efforts to strengthen digital identity security, such as the UIDAI's Scheme for Innovation and Technology Association with Aadhaar (SITAA), which aims to combat cyberfraud using AI.
