The Centre has sharply tightened its online content rules, directing social media platforms to remove unlawful content within three hours of being notified, replacing the earlier 36-hour window.
The amended rules, notified on February 10, will come into effect from February 20 and modify the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Under the revised framework:
Social media companies must act within three hours of receiving notices from the government or authorised agencies about unlawful content, a drastic reduction from the earlier 36-hour compliance window.
The directive applies to global platforms operating in India, including Meta, YouTube and X. The government has not specified the reasons for shortening the deadline.
India, which has more than one billion internet users, is one of the world’s largest digital markets. The move reinforces its position as one of the most assertive regulators of online content, placing added compliance pressure on global technology firms.
Legal experts and industry representatives say the three-hour deadline could be difficult to implement in practice.
An expert said it would be “practically impossible” for companies to remove content within such a short timeframe in many cases, adding that the rule leaves limited room for independent assessment before compliance.
A social media executive said the amendment was introduced without adequate consultation and noted that international standards generally provide longer timelines for content removal.
Meta declined to comment on the new rules. X and Google, which operates YouTube, did not immediately respond.
India’s IT rules empower the government to direct intermediaries to remove content considered illegal under various laws, including those related to national security and public order.
In recent years, authorities have issued thousands of takedown orders. According to transparency disclosures, Meta restricted more than 28,000 pieces of content in India during the first six months of 2025 following government requests.
The revised norms also dilute an earlier proposal that required AI-generated content to be labelled across 10 percent of its surface area or duration. Instead, platforms must now ensure that such content is “prominently labelled”.
The amendment comes amid growing global scrutiny of social media companies, with regulators in several jurisdictions demanding faster removals and greater accountability for online content.