As deepfakes and AI-generated misinformation continue to spread rapidly online, the Indian government has taken a decisive step to regulate their misuse. The Ministry of Electronics and Information Technology (MeitY) has released a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aiming to make AI-generated content more transparent and accountable.
The draft, issued on October 22, 2025, introduces new obligations for social media platforms and digital intermediaries. Under the proposed rules, any content created or modified using artificial intelligence must carry a clear label or identifier, ensuring users can distinguish between real and synthetic content.
For the first time, MeitY has formally defined the term “Synthetically Generated Information”, referring to any video, image, or audio content created or modified using artificial intelligence or machine learning.
According to the draft, any platform that creates or distributes such AI-generated content must ensure it carries a visible or audible watermark or permanent metadata tag. This identifier should make it unmistakably clear to users that the material is artificially created and not authentic.
The ministry’s objective is to prevent misinformation, identity fraud, and manipulation—issues that have become increasingly prevalent with the rise of deepfake technology and generative AI tools.
The draft mandates that all social media and digital platforms offering AI-based content creation or modification features must clearly and prominently label such content.
Specific labeling guidelines have been proposed:
For videos and images: The watermark or label must cover at least 10% of the visible content area, ensuring it cannot be easily ignored or removed.
For audio content: The disclosure must be clearly audible during the first 10% of the recording so that listeners are immediately informed that the content is synthetic.
Moreover, the metadata or digital identifier associated with the AI content must be permanent and tamper-proof, meaning it cannot be deleted or altered once published.
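The draft sets outcome thresholds rather than prescribing any particular technique, so how a platform satisfies them is left to the intermediary. As a rough illustration only, the Python sketch below shows one way the image-coverage and audio-timing thresholds could be applied, assuming the Pillow imaging library; the function names, banner layout, label text, and metadata key are hypothetical and are not drawn from the draft.

```python
# Illustrative sketch only: the draft specifies thresholds, not implementations.
# Assumes the Pillow imaging library (pip install Pillow); function names and
# the label text are hypothetical, not taken from the draft.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

MIN_LABEL_AREA_FRACTION = 0.10    # label must cover >= 10% of the visible area
AUDIO_DISCLOSURE_FRACTION = 0.10  # disclosure within the first 10% of duration


def add_visible_label(image_path: str, output_path: str,
                      text: str = "AI-GENERATED CONTENT") -> None:
    """Overlay a banner covering at least 10% of the image area and record
    a synthetic-content flag in the output PNG's metadata."""
    img = Image.open(image_path).convert("RGBA")
    width, height = img.size
    # A full-width banner whose height is 10% of the image height covers
    # exactly 10% of the total area, meeting the proposed threshold.
    banner_height = max(1, int(height * MIN_LABEL_AREA_FRACTION))
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.rectangle([(0, height - banner_height), (width, height)],
                   fill=(0, 0, 0, 180))
    draw.text((10, height - banner_height + banner_height // 4),
              text, fill=(255, 255, 255, 255))
    labelled = Image.alpha_composite(img, overlay).convert("RGB")

    # A plain text chunk is NOT tamper-proof; it only marks where a durable,
    # signed identifier would sit in a real deployment. Assumes PNG output.
    metadata = PngInfo()
    metadata.add_text("synthetically_generated", "true")
    labelled.save(output_path, pnginfo=metadata)


def audio_disclosure_window(duration_seconds: float) -> float:
    """Length of the opening window (in seconds) within which an audible
    disclosure would have to be played."""
    return duration_seconds * AUDIO_DISCLOSURE_FRACTION


if __name__ == "__main__":
    # Example: a 3-minute clip would need its disclosure within the first 18 s.
    print(audio_disclosure_window(180.0))  # 18.0
```

A full-width banner sized at 10% of the image height is simply one easy way to meet the area requirement; the plain-text metadata key, by contrast, can be stripped by re-encoding, so satisfying the draft's "permanent and tamper-proof" expectation would in practice require cryptographically bound provenance information rather than the simple tag shown here.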
The proposed rules apply to Significant Social Media Intermediaries (SSMIs), meaning platforms with more than 5 million (50 lakh) registered users in India. This category includes major platforms such as Facebook, Instagram, YouTube, X (formerly Twitter), and Snapchat.
These large platforms will face stricter compliance requirements, including transparency obligations, technical safeguards, and user reporting mechanisms.
MeitY stated that the draft aims to keep India’s internet ecosystem “open, safe, trusted, and accountable.”
The ministry emphasized that the misuse of generative AI and deepfake technologies has led to a surge in misinformation, identity manipulation, and even potential election interference. False visuals and synthetic voices have been used to spread propaganda, damage reputations, and mislead citizens on a large scale.
By enforcing clear labeling norms, the government hopes to empower users with awareness, discourage malicious actors, and make digital platforms more responsible for the content they host.
Experts believe that these amendments could set a global precedent, as countries worldwide grapple with similar challenges posed by generative AI tools.
To ensure transparency, MeitY has invited public feedback and stakeholder suggestions on the draft. Citizens, social media platforms, technology experts, and civil society groups can submit their comments until November 6, 2025.
Suggestions can be sent via email to itrules.consultation@meity.gov.in. The ministry will review all inputs before finalizing the amendment to the IT Rules.
With the rise of generative AI, deepfakes have blurred the line between truth and fabrication. From political propaganda to financial scams, the misuse of AI-generated content poses a real threat to digital trust.
By proposing mandatory labeling, traceable metadata, and platform accountability, India is taking a proactive step to ensure that technology serves transparency, not deception.
If approved, MeitY’s new framework could mark a major shift in how AI-generated content is regulated—making India one of the first countries to legally require clear identification of synthetic media across digital platforms.