India’s Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the IT Rules that would require AI-generated or synthetically created content to carry labels that remain clearly visible for the entire duration of the content. The move is aimed at improving transparency around deepfakes, AI-edited videos, cloned voices, and other synthetic media that can closely resemble real content. The government has also extended the deadline for public feedback on the draft rules to May 7.
At the core of the proposal is a stricter definition and treatment of what the government refers to as synthetically generated information (SGI). This covers not only fully AI-generated content but also material that has been significantly altered using AI tools in a way that could mislead viewers. Examples include face-swapped videos, digitally manipulated speeches, cloned voice recordings, and AI-generated news clips and images that appear realistic but are not based on real-world events. The concern is that even slight AI alterations, if undisclosed, can distort public perception and be misused for impersonation and misinformation campaigns.
A significant change in the draft rules is the requirement that disclosure labels remain continuously visible throughout the playback or viewing of the content. This means that AI-generated videos would need persistent on-screen markers or watermarks rather than brief disclaimers at the beginning or end. Similarly, images and posts would need clearly visible indicators that do not disappear as users interact with or scroll past the content. The purpose is to prevent situations where users miss disclosures entirely due to platform design or fast consumption habits.
The government argues that these rules are needed because AI tools have become highly advanced and can now create very realistic fake videos, images, and audio. In particular, deepfakes can closely replicate a person’s face, expressions, and voice, enabling scenarios where public figures could be made to appear as if they are saying or doing things they never actually did. This creates serious risks of misinformation, election manipulation, financial fraud, and reputational harm.
Along with the labelling requirements, MeitY has also extended the consultation deadline for stakeholder feedback on the draft IT Rules to May 7. This extension is intended to allow more time for technology companies, digital platforms, AI developers, media organisations, and civil society groups to review the proposed changes and provide detailed inputs. The government is expected to evaluate these responses before finalising the amendments, particularly to ensure that the rules are enforceable without placing excessive operational burdens on smaller platforms and startups.
The Tech Portal is published by Blue Box Media Private Limited.

Ashutosh is a Senior Writer at The Tech Portal, largely reporting on new tech and the intersection of technology and business. Ashutosh’s career spans nearly a decade of technology writing across multiple platforms and languages.