New IT Rules Mandate Labelling for AI Content and Takedown Speed
The Indian government has announced significant changes to its Information Technology regulations, aimed at enhancing the management of artificial intelligence-generated content. Effective from 20 February 2026, these amendments will require social media platforms to label AI-generated materials clearly and act swiftly on illegally posted content, including deepfakes.
Under the new rules, platforms such as Facebook, Instagram, and YouTube must remove content flagged as unlawful within a much tighter timeframe: material deemed illegal by the authorities must be taken down within three hours, compared with the previous 24-36 hours. For particularly sensitive material, including non-consensual nudity and deepfakes, the deadline is shorter still, at just two hours.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetically generated content as any audio, visual, or audiovisual material that has been created or altered in a way that makes it appear authentic. The definition is aimed at the rise of manipulated content that can mislead users about its authenticity.
A senior official from the Ministry of Electronics and Information Technology stated that the regulations include exceptions for minor edits often performed automatically by smartphone cameras. The latest rules also respond to concerns raised by social media firms about the practicalities of precise labelling; as a result, the final requirement is less stringent than earlier drafts, which had called for labels covering 10% of the content.
Social media platforms will now bear responsibility for enforcing these labelling mandates and must seek disclosures from users about whether their content is AI-generated. Where such a disclosure is not obtained, platforms must either label the synthetically generated content proactively or remove any deepfake material created without authorisation.
These changes follow a broader trend of tightening regulations on digital content in India, emphasising user safety and accountability. The government has also relaxed prior limitations by allowing multiple designated officers within states to issue takedown orders, streamlining the process for populous regions where a single officer may be overwhelmed by requests.
Failure to comply with these new requirements could result in social media firms losing their safe harbour status, which protects them from legal liability for content shared by users. The rules underline the expectation that platforms must demonstrate due diligence in monitoring and managing content shared on their services.
With the increasing prevalence of AI-generated material and concerns over its potential for misuse, these regulatory changes mark a significant step in the Indian government’s approach to digital content management, aiming to enhance the safety and authenticity of information circulated online.