Gujarat High Court Issues Notices to Meta, X, Google on Deepfake Takedown Procedures

This article was generated by AI and cites original sources.

On April 15, the Gujarat High Court issued notices to major technology intermediaries—including Meta India, Google, X, Reddit and Scribd—in a public interest litigation (PIL) seeking tighter controls on the misuse of artificial intelligence for generating and circulating deepfake videos and photographs. The court’s direction centers on enforcing time-bound takedown and traceability obligations under India’s information technology framework, with coordination through a government-run portal.

Court’s Compliance Expectations

According to a report from mint citing PTI, a division bench of Chief Justice Sunita Agarwal and Justice D N Ray issued notices to the intermediaries, returnable on May 8. The bench directed the respondents to ensure they are brought onboard the Sahyog portal, describing it as a coordination mechanism for time-bound action related to takedown of unlawful content, in strict compliance with the Information Technology Act, 2000.

The court stated that “Effective and meaningful responses/action of the respondent intermediaries will be key to the due diligence obligations enforced upon them under the statutory framework,” according to an order passed recently and made available this week.

The litigation’s immediate focus is the intermediary response loop that governs how quickly platforms disable access to unlawful synthetic media and how they support law enforcement investigations.

Sahyog Portal: Coordinated Takedown Workflow

The Union government’s affidavit details the technical approach: in October 2024, the Centre created the Sahyog portal to enable immediate, coordinated and time-bound action against unlawful content. The portal brings together authorised law enforcement agencies and intermediaries on a single platform.

The stated purpose is swift takedown of unlawful synthetically generated information and access to subscriber information, logs and judicial evidence for identifying offending users.

From an operational standpoint, this standardizes the mechanics of compliance. A shared portal can reduce the latency between a lawful notice and the platform’s action, and potentially improve how evidence is packaged for downstream use. The court’s notice process may push lagging platforms into the same workflow, shortening the time between notification and removal of deepfake material.

Compliance Variations Among Platforms

The Union Ministry of Home Affairs (MHA) informed the court that compliance varies among intermediaries. The MHA stated that some intermediaries—including Meta and Google—have improved the speed, efficiency and traceability of compliance actions. Others have not yet been onboarded or fully integrated with the portal.

The MHA specifically noted X and described limited responsiveness to notices about unlawful content, including synthetically generated information. According to the report, a total of 94 notices regarding unlawful content were issued to X, but formal responses were received for only 13.

The MHA also reported that X disabled 788 notified URLs in 2024, 70 in 2025 and 6 in 2026. However, the ministry argued that the low rate of formal responses reflects a lack of meaningful cooperation with lawfully issued directions.

The ministry’s position is that such conduct amounts to a breach of enhanced due diligence obligations under the Information Technology Rules. The MHA stated this could impede law enforcement agencies’ ability to ensure timely removal of unlawful content and to carry out effective investigations.

The broader implication is procedural: deepfake governance relies on repeatable, auditable responses. If platforms disable content without formal acknowledgment—or do not respond within expected timeframes—authorities may struggle to reconcile takedown actions with evidence needs and investigatory timelines.

The Public Interest Litigation and Regulatory Gaps

The PIL, filed by petitioner Vikas Nair, highlights the widespread creation and circulation of AI-generated videos on digital platforms and frames the practice as a concern for public order and democratic functioning. The petitioner questioned the government’s approach to framing specific laws or regulatory mechanisms against deepfakes and synthetic media.

Nair argued that existing Indian legal provisions—including the Information Technology Act, 2000 and related provisions under the Bharatiya Nyaya Sanhita—are insufficient to effectively regulate the creation, dissemination and circulation of fake and AI-generated videos on digital platforms. He sought a direction for a comprehensive regulatory mechanism to address misuse of AI for generating and circulating fake videos and photographs.

The PIL emphasizes the need for laws that address rapid technological advancement, noting that deepfakes can spread quickly and create impacts that may be difficult to reverse. This aligns with the court’s focus on time-bound takedown coordination via Sahyog.

On February 24, the High Court issued notices to the Gujarat government and the Centre, including the Ministry of Home Affairs and the Ministry of Electronics and Information Technology. The state government’s affidavit described practical issues: lawful notices to intermediaries encounter delays, repeated procedural obligations and non-compliance by certain platforms. The affidavit noted cases where intermediaries failed to provide substantive replies or remove content despite show-cause notices and legal grounds for removal.

These details indicate the dispute centers on execution as much as policy. Even if regulation exists, the court’s focus on due diligence obligations and coordinated takedown workflows suggests that the operational layer—how platforms process notices and communicate actions—may determine whether deepfake misuse can be addressed effectively.

Source: mint – technology