
India is tightening the rules for social media platforms, making it much harder for controversial content to remain online for long. Under the newly amended Information Technology Rules, major platforms like Instagram, Facebook, YouTube and X will now be required to take down unlawful and harmful content within three hours of receiving a government or court order. In more urgent cases, platforms are expected to act even faster, with an effective response window of about two hours. Notably, the earlier framework allowed up to 36 hours for compliance.

The amendments, notified through an official government gazette, represent one of the most aggressive timelines imposed anywhere in the world for online content removal. They apply to large social media intermediaries operating in India, a country that has emerged as the second-largest internet market for most global platforms. India has more than 1 billion internet users and over 750 million smartphone users, meaning content can reach massive audiences within minutes, and the government argues that this scale necessitates faster intervention.

The tightened deadline is primarily aimed at curbing the spread of unlawful material, including content that threatens national security, public order, and individual safety. A key focus of the new rules is the explosion of AI-generated and synthetically manipulated media. Deepfake videos, cloned voices, and digitally altered images have increasingly been used for impersonation, fraud, political misinformation, and harassment. Authorities argue that such content often goes viral in a matter of hours, making the earlier 36-hour window ineffective in preventing real-world harm.

In addition to faster takedowns, the amended rules introduce stricter obligations around artificial intelligence. Social media platforms are now required to ensure that AI-generated or synthetically created content is clearly labelled so users can distinguish it from authentic material. These labels must be prominent and accompanied by technical markers or metadata that help identify the nature and origin of the content. Once applied, these identifiers cannot be removed or bypassed by users.

Platforms are also expected to significantly upgrade their internal compliance systems. This includes deploying automated tools to detect AI-generated and deceptive content, maintaining dedicated teams to handle government and court orders around the clock, and ensuring that takedown requests are processed without delay, regardless of time of day.

Importantly, failure to comply with the three-hour rule carries serious legal consequences. Platforms that miss the deadline risk losing their "safe harbour" status under Indian law. This protection has historically shielded intermediaries from liability for user-generated content, provided they follow government-mandated procedures. Without it, companies could be held directly responsible for content hosted on their services, exposing them to civil claims or criminal proceedings.

The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.