On April 15, 2026, the Gujarat High Court (HC) issued notices to Meta, Google, X, Reddit, and Scribd in a public interest litigation (PIL) focused on the spread of AI-generated videos and deepfake content. The court directed these intermediaries to file responses by the next hearing on May 8, framing the dispute less as a lack of regulation and more as a question of implementation—including how quickly platforms act on lawful takedown requests and how uniformly they comply with existing obligations.
For technologists and platform operators, the case matters because it ties together several moving parts: AI-generated content labeling rules introduced by India’s central government, the operational status of the SAHYOG portal for coordination with law enforcement, and court-level expectations around “strict enforcement and uniform implementation” of the statutory regime. While the underlying subject is public order, the immediate technical and operational impact lands on how platforms manage synthetic media, respond to government notices, and structure compliance workflows.
What the Gujarat HC case targets: AI-generated video distribution on platforms
According to the Inc42 Media report, the PIL flagged “widespread creation and circulation of AI generated videos on digital platforms” as posing a “serious threat to public order and functioning of a healthy democracy.” The petition also asked for an “immediate requirement to curb the creation and use of such AI deepfakes,” arguing that such content can penetrate social fabric and lead to “irreversible situations.”
From a technology perspective, the case is directed at intermediaries that host or distribute AI-generated content—rather than at creators alone. The court’s notice list includes global social platforms (Meta, Google, X, Reddit) as well as the document-sharing service Scribd, indicating that the compliance expectations are intended to span multiple content ecosystems and user interaction models.
The petition’s legal argument, as summarized by Inc42 Media, contends that existing frameworks—specifically the Information Technology Act, 2000 and provisions under the Bharatiya Nyaya Sanhita (BNS)—are inadequate to regulate the creation and dissemination of deepfakes effectively. However, the court’s focus during the hearing shifted toward enforcement mechanics.
Implementation over new rules: the court’s enforcement framing
During the hearing, both the Central and the Gujarat governments maintained that the legal framework is already in place. The report says they pointed to gaps in enforcement driven by delays and non-compliance by intermediaries.
The HC, in response to these submissions, observed that the core issue lies in implementation rather than the absence of regulation. The division bench—chief justice Sunita Agarwal and justice DN Ray—said the “issues which need consideration… is about the strict enforcement and uniform implementation of the existing statutory regime,” as quoted in the report.
This enforcement framing has practical implications for platform engineering and operations. Even where policy requirements exist, the burden shifts to building and maintaining processes that can reliably convert government directions into timely, auditable actions: content removal workflows, notice handling, internal escalation, and user-level or account-level measures where applicable.
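The "timely, auditable actions" requirement can be made concrete with a small sketch. The record and state names below are hypothetical (the report does not describe any platform's internal tooling); the point is that every government direction becomes a structured event with a reconstructable audit trail, rather than an informal request.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class NoticeState(Enum):
    RECEIVED = "received"
    TRIAGED = "triaged"
    ACTIONED = "actioned"
    FORMALLY_RESPONDED = "formally_responded"


@dataclass
class TakedownNotice:
    """Hypothetical record for one government direction, with an audit trail."""
    notice_id: str
    received_at: datetime
    state: NoticeState = NoticeState.RECEIVED
    audit_log: list = field(default_factory=list)

    def transition(self, new_state: NoticeState, actor: str) -> None:
        # Append an audit entry before changing state, so every action
        # taken on the notice can be reconstructed and reported later.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "from": self.state.value,
            "to": new_state.value,
            "actor": actor,
        })
        self.state = new_state


notice = TakedownNotice("N-001", datetime.now(timezone.utc))
notice.transition(NoticeState.TRIAGED, "triage-bot")
notice.transition(NoticeState.ACTIONED, "ops-analyst")
print(notice.state.value, len(notice.audit_log))
```

A real system would persist the log and attach evidence (content IDs, timestamps, the lawful direction itself), but the state-machine shape is the core idea: no action without a recorded transition.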
The court’s interim directions also required intermediaries to ensure onboarding onto the government’s SAHYOG portal to enable real-time coordination with law enforcement agencies for time-bound takedown of unlawful content. Inc42 Media notes that SAHYOG has been operational since October 2024 and is designed to connect law enforcement agencies with intermediaries on a single platform.
SAHYOG and compliance metrics: what the government says platforms are (not) doing
Inc42 Media reports that the Union Ministry of Home Affairs said some platforms, including Meta and Google, have improved compliance, while others lagged in onboarding and responsiveness. The ministry’s statement, as relayed by the report, says that although partial action has been reported, a “low rate of formal responses” results in a lack of meaningful cooperation with lawfully issued directions.
The ministry further argues that such conduct amounts to breach of “enhanced due diligence obligations” and “severely impedes” law enforcement’s ability to ensure timely removal and to carry out effective investigations. While the report does not specify the technical tooling behind those obligations, the reference to “formal responses” and timeliness suggests that platforms are expected to manage government notices as structured events rather than informal requests.
A concrete metric in the report concerns X. The Centre flagged non-responsiveness by X, stating that out of 94 intimations issued between 2024 and 2026, formal responses were received in only 13 cases. For compliance teams, this kind of ratio can translate into higher operational scrutiny, more frequent escalation, and tighter integration between notice ingestion systems and enforcement actions.
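The arithmetic behind that ratio is simple, but it is the kind of metric a compliance dashboard would track. The escalation threshold below is hypothetical, not a figure from the report; only the 13-of-94 numbers come from the source.

```python
# Figures reported for X: 13 formal responses to 94 intimations (2024-2026).
intimations = 94
formal_responses = 13

response_rate = formal_responses / intimations
print(f"{response_rate:.1%}")  # 13.8%

# A hypothetical threshold a compliance dashboard might apply; any value
# below it flags the platform for escalation and closer regulatory scrutiny.
ESCALATION_THRESHOLD = 0.50
needs_escalation = response_rate < ESCALATION_THRESHOLD
print(needs_escalation)  # True
```

At roughly 14%, the reported response rate would sit far below any plausible internal target, which is consistent with the Centre characterizing it as non-responsiveness.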
The report also describes a prior enforcement event in January, when MeitY pulled up X and directed it to remove obscene and unlawful imagery generated using its AI chatbot Grok, warning of legal action for non-compliance. It says the ministry had sought a detailed report on takedowns, user-level actions, and compliance measures at the time. That sequence—ministry direction, reporting requests, and now court notices—illustrates how regulators can build a record of compliance (or lack of it) over multiple incidents.
Broader regulatory timeline: labeling synthetic content and tightening takedown timelines
This Gujarat HC case arrives amid increasing scrutiny of how platforms handle AI-generated content in India. Inc42 Media reports that the central government amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to bring AI-generated content under India’s regulatory ambit. The updated rules are described as effective February 20, 2026, defining “synthetically generated information” and mandating clear labeling of such content.
In addition, the report says that in March, MeitY moved to tighten compliance timelines for these companies. The proposed changes, issued under Section 87 of the IT Act, would require social media intermediaries to “comply with clarifications, advisories, orders, directions, standard operating procedures, codes of practice or guidelines” issued in relation to implementing the rules.
The proposed framework in the report also includes a faster takedown obligation: platforms hosting information that may be used to “commit unlawful acts” would be required to remove such content within three hours of receiving government directions. The report further states that MeitY extended the deadline for stakeholder feedback on the recently unveiled draft amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, from April 12 to April 29.
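A three-hour window turns takedown into an SLA problem. As a minimal sketch (assuming the clock starts when the direction is received, which the report does not specify), deadline computation and breach detection look like this:

```python
from datetime import datetime, timedelta, timezone

# Three-hour removal window described in the draft amendments.
TAKEDOWN_SLA = timedelta(hours=3)


def takedown_deadline(direction_received_at: datetime) -> datetime:
    """Time by which removal must be completed under the proposed rule."""
    return direction_received_at + TAKEDOWN_SLA


def is_breached(now: datetime, direction_received_at: datetime) -> bool:
    """True once the current time has passed the takedown deadline."""
    return now > takedown_deadline(direction_received_at)


received = datetime(2026, 3, 10, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(received).isoformat())  # 2026-03-10T12:00:00+00:00
print(is_breached(datetime(2026, 3, 10, 12, 30, tzinfo=timezone.utc), received))  # True
```

The operational difficulty is not the clock math but everything upstream of it: ingesting the direction, locating every copy of the content, and acting before the window closes, at all hours.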
Taken together, the Gujarat HC’s emphasis on uniform enforcement and the government’s parallel move toward labeling and faster takedown timelines suggest a regulatory direction that is procedural as well as substantive. Even when the underlying requirement is “remove unlawful content,” the operational challenge is how to detect, classify, and act within defined time windows—while maintaining the ability to produce formal responses and investigative support.
As an analysis point (based only on what the report states), observers may watch for whether onboarding to SAHYOG becomes a gating factor in compliance assessments, and whether the three-hour takedown concept—discussed in the draft amendments—aligns with the real-time coordination described in the court’s interim directions.
Why this matters for AI and platform engineering
Deepfakes and AI-generated media create an engineering problem: synthetic content can be produced at scale and disseminated quickly. The technology story in this case is not about whether AI can generate realistic media; it is about how platforms operationalize obligations once such content appears on their services.
The Gujarat HC’s notice order makes that operational layer visible. By requiring responses by May 8 and by centering SAHYOG onboarding and enforcement uniformity, the court is effectively asking intermediaries to demonstrate that their systems can translate lawful directions into time-bound, coordinated actions. The government’s reported metrics—such as X’s 13 formal responses out of 94 intimations between 2024 and 2026—also indicate that compliance is being measured, not just requested.
For developers, trust-and-safety teams, and policy engineers, the case underscores the growing intersection of generative AI workflows with regulatory compliance tooling: notice handling pipelines, content moderation routing, labeling obligations for “synthetically generated information,” and the ability to coordinate with law enforcement in a structured, auditable way.
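The labeling obligation implies that synthetic-media status becomes machine-readable metadata attached to content. The field names and classifier source below are purely illustrative—the amended rules' actual schema, if any, is not described in the report—but they show the kind of structured record a moderation pipeline could carry alongside each upload.

```python
import json


def label_synthetic(content_id: str, is_synthetic: bool, classified_by: str) -> str:
    """Build a hypothetical labeling record for a piece of uploaded content.

    Field names are illustrative assumptions, not drawn from the amended
    IT Rules; the point is that the label travels with the content as
    structured, auditable metadata.
    """
    record = {
        "content_id": content_id,
        "synthetically_generated": is_synthetic,
        # Surface a clear user-facing label whenever content is synthetic.
        "label_visible_to_users": is_synthetic,
        "classified_by": classified_by,  # e.g. uploader declaration or a detector
    }
    return json.dumps(record)


print(label_synthetic("vid-42", True, "upload-declaration"))
```

Downstream systems—search, recommendation, notice handling—can then key off the same record, which is what makes “uniform implementation” tractable across surfaces.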
Source: Inc42 Media