


Beyond the Draft: India’s Amended IT Rules for Synthetically Generated Information

published on 23 February 2026 | reading time approx. 5 minutes


In our November issue, we unpacked the draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (“Draft Amendment Rules”). In this follow-up, we dive into the notified Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment Rules”), contrasting them with the Draft Amendment Rules to highlight the standout shifts and what they mean for players in the tech sector.


Notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, and effective February 20, 2026, the final Amendment Rules further refine the governance of AI on intermediary platforms, incorporating feedback received during the public consultation on the Draft Amendment Rules to balance innovation and accountability.

Some of the key differences from the Draft Amendment Rules are:

Definition and Scope of Synthetically Generated Information

The Draft Amendment Rules took a broad approach to Synthetically Generated Information (SGI), covering all content types without limiting the definition to audio-visual content or offering exemptions. They also imposed strict labeling rules, such as requiring visual SGI labels to cover 10 per cent of the screen area. The Amendment Rules narrow the scope by defining SGI as audio, visual, or audio-visual content that is artificially or algorithmically created to appear real and that depicts people or events deceptively, thereby keeping text-based AI outputs outside the scope of SGI. Furthermore, the final Amendment Rules switch to flexible, principle-based labeling: intermediaries need only ensure that prominent, easily noticeable labels are displayed that users can readily perceive.

The Amendment Rules also provide clear exemptions through a proviso inserted in the definition that excludes routine good-faith uses. These carve-outs cover basic editing like color correction, formatting, or noise reduction; accessibility enhancements such as transcription, subtitling, or translation; and preparatory materials like PDFs, research documents, or training aids that do not fabricate reality or create false records. This precision prevents the stifling of everyday AI tools, but platforms must evaluate "good faith" intent, elevating their role in content governance.

These shifts reflect public consultation feedback, reducing overreach while introducing stricter enforcement for harmful SGI like deepfakes or non-consensual imagery.

Strengthened Due Diligence Obligations on Intermediaries

User Notifications Every 3 Months

Intermediaries such as social media platforms must now notify users every 3 (three) months (as opposed to the annual notification requirement under the IT Rules, 2021) via their terms of service or privacy policies, in English or in Indian languages. The notification must inform users that the intermediary can suspend accounts for policy violations, that illegal content can attract legal penalties under the IT Act, and that serious crimes (such as those under the Protection of Children from Sexual Offences (POCSO) Act, 2012 or the criminal codes) may be reported to the authorities. Intermediaries handling AI or SGI must add warnings about potential civil and criminal liability for misuse, citing laws on deepfakes, child protection, elections, and women's rights.

Proactive "Suo Motu" Duty on Synthetic Content

A major change is the proactive "suo motu" duty: intermediaries must identify and act on unlawful SGI even without complaints, by removing it, suspending accounts, aiding victims, and notifying the concerned authorities.

Tech Tools to Block Prohibited AI Content

Intermediaries must also adopt "reasonable" technical tools, such as AI filters, to block the creation or sharing of prohibited AI content upfront, including child sexual abuse material (CSAM), non-consensual intimate imagery, deepfakes that deceive people, fake documents, or guides for making explosives or arms.

Additional Obligations for SSMIs

Significant Social Media Intermediaries (SSMIs), i.e. intermediaries with more than 5 million users, must require users to declare whether content is AI-generated before displaying, uploading, or publishing it. The declaration must then be verified using appropriate technical tools, such as automated detection (checking metadata or other signals), and it must be clearly and prominently displayed as a label marking the content as SGI. Furthermore, the IT Rules, 2021 only required SSMIs to "endeavour" to use technical measures to spot and block re-uploads of removed depictions of child abuse or rape, notifying users of the restrictions. The Amendment Rules now make it mandatory for SSMIs to deploy "appropriate" automated tools for proactive due diligence on all SGI content. If an SSMI knows about, allows, promotes, or ignores undeclared or mislabeled synthetic content that breaks the rules, it loses its safe harbour protection under the Information Technology Act, 2000 (“IT Act”).
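To make the declaration-plus-verification flow concrete, the sketch below illustrates one way an SSMI might combine a user's upload-time declaration with an automated metadata check. This is purely illustrative and not drawn from the Rules: the field names and the metadata markers are hypothetical assumptions, and real deployments would rely on provenance standards (e.g. embedded content credentials) or ML-based classifiers rather than simple key matching.

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    user_declared_sgi: bool          # user's declaration at upload time
    metadata: dict = field(default_factory=dict)  # container/EXIF-style metadata

# Hypothetical provenance markers -- illustrative only, not a real standard's key names.
AI_MARKERS = {"ai_generated", "synthetic", "ai_provenance_assertion"}

def detect_sgi_signal(metadata: dict) -> bool:
    """Hypothetical heuristic: flag the upload if any known
    AI-provenance marker appears among the metadata keys."""
    return bool(AI_MARKERS & {k.lower() for k in metadata})

def label_decision(upload: Upload) -> str:
    """Label as SGI if the user declared it OR automated detection flags it."""
    if upload.user_declared_sgi or detect_sgi_signal(upload.metadata):
        return "LABEL_AS_SGI"   # display a clear, prominent SGI label
    return "NO_LABEL"
```

For example, `label_decision(Upload(False, {"ai_generated": "1"}))` returns `"LABEL_AS_SGI"` even though the user made no declaration, reflecting the rule that verification tools must catch undeclared synthetic content.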

Accelerated Takedown and Grievance Timelines

A core change is the shortening of response and takedown windows. Government or law-enforcement takedown orders under Rule 3(1)(d) now demand action within 3 hours, down from 36 hours. Grievance resolution drops from 15 days to 7 days, with sensitive content, such as the dissemination of intimate images, requiring acknowledgment within 2 hours (down from 24 hours) and takedown within 36 hours (down from 72 hours). These reduced timelines, absent from the Draft Amendment Rules, apply to all intermediaries and will require system upgrades to comply with the Amendment Rules.
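The new windows are simple enough to express as data, which is how a compliance system would likely track them. The sketch below encodes the timelines described above as a deadline calculator; the event names are hypothetical labels chosen for illustration, not terms from the Rules.

```python
from datetime import datetime, timedelta

# Timelines under the Amendment Rules, as described above
# (previous IT Rules, 2021 values noted in comments).
TIMELINES = {
    "govt_takedown_order":      timedelta(hours=3),   # was 36 hours
    "grievance_resolution":     timedelta(days=7),    # was 15 days
    "intimate_image_ack":       timedelta(hours=2),   # was 24 hours
    "intimate_image_takedown":  timedelta(hours=36),  # was 72 hours
}

def deadline(event: str, received_at: datetime) -> datetime:
    """Latest compliant action time for a given event type."""
    return received_at + TIMELINES[event]

received = datetime(2026, 2, 20, 9, 0)
print(deadline("govt_takedown_order", received))  # 2026-02-20 12:00:00
```

A takedown order received at 09:00 must now be acted on by noon the same day, which illustrates why the article notes that intermediaries will need automated queueing and escalation rather than manual review to meet these windows.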

Implications for Intermediary Platforms

The changes brought through the Amendment Rules transform intermediaries from passive hosts into proactive monitors, aligning with global standards such as the EU AI Act while tying compliance to safe harbour. The tighter timelines may spur over-removal, impacting expression and casting a chilling effect on free speech. However, these rules are seen as necessary, especially after the recent X-Grok incident, in which sexually explicit AI-generated deepfakes were disseminated on the platform and left unchecked. This forced MeitY to step in with a letter directing X to take immediate action against the misuse of Grok and other AI tools on its platform. Such instances highlight the chaos caused by unlabeled SGI content and how it misleads users, eroding trust in the platform and in the governance framework.

While compliance with the Amendment Rules poses operational challenges for intermediaries, it remains vital. Intermediaries may therefore consider auditing their detection tools, training staff on the new requirements, and testing their automated systems to ensure the technical measures they adopt are compliant.

Tech & Data Bites

Author


Prarthana Vasudevan

Consultant

+91 80 44784 803

