Pixels of Panic: Deepfakes, Crises, and the Legal Black Hole in India
In the age of information warfare, truth is no longer the first casualty; it is the battlefield itself. During Operation Sindoor, India witnessed not only the precision of military might but also the shadow of parallel misinformation campaigns. Among the most dangerous tools in this arsenal are deepfakes: AI-generated videos or audio clips that make people appear to say or do things they never said or did.
What makes deepfakes particularly alarming in conflict situations is their ability to bypass rational filters and provoke panic. Unlike fake news articles or WhatsApp forwards, deepfakes look and sound real. For the digitally illiterate or the older generation, these forgeries can be indistinguishable from reality. Yet, as deepfake technology becomes more sophisticated, India’s laws remain largely silent on how to regulate, or even define, it. This article examines the dangers of deepfakes during crises, the gaps in India’s legal system, and how the country can prepare before misinformation overwhelms public trust.
Deepfakes 101: Real Faces, Fake News
Deepfakes are created using machine learning techniques such as generative adversarial networks (GANs), which can synthesise realistic human faces, voices, and gestures. While the technology has legitimate uses in entertainment, accessibility, and education, its misuse has been far more visible. In 2022, during the Russia-Ukraine conflict, a fabricated video surfaced that appeared to show Ukrainian President Volodymyr Zelenskyy calling on his troops to surrender. Though quickly debunked, the clip spread across social media, sowing confusion in a population already under siege. Similar patterns are emerging globally: fake leader speeches, doctored footage of bombings, and fabricated crisis updates.
In India, where WhatsApp forwards already dominate the misinformation landscape, adding deepfake visuals into the mix creates an even more potent threat. This was evident during the fog of war in Operation Sindoor, when fake visuals circulated claiming that multiple fighter jets had been shot down and their pilots captured by enemy forces.
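Although this article’s focus is legal, a toy sketch of the underlying mechanism helps explain why such forgeries have become so cheap to produce. The snippet below is a minimal, purely illustrative PyTorch example, using tiny networks and random noise in place of real face data, of the adversarial loop at the heart of a GAN: a generator learns to produce samples that a discriminator can no longer tell apart from genuine ones.

```python
# Toy GAN training step (illustrative only): tiny networks, random stand-in data.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, data_dim)     # stand-in for real images
noise = torch.randn(32, latent_dim)
fake_batch = generator(noise)

# Discriminator step: label real samples 1, generated samples 0.
d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
         loss_fn(discriminator(fake_batch.detach()), torch.zeros(32, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: try to make the discriminator call fakes "real".
g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The same adversarial principle, scaled up with far larger models and real training data, underlies the face-swap and voice-clone tools now circulating freely online.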
Weaponising Confusion: Deepfakes in Times of Crisis
During wars or natural disasters, people crave information, any information, that helps them make sense of the unfolding chaos. This makes crises fertile ground for disinformation. Deepfakes can simulate military losses or fabricate enemy victories, show leaders making inflammatory statements to trigger unrest, or present staged footage of civilian panic to undermine public morale.
Older citizens or those without digital literacy are particularly vulnerable. If a video looks authentic, they are more likely to forward it than verify it. In Operation Sindoor, a handful of fake clips circulated online, claiming to show destroyed military assets. Although experts debunked these, the speed at which they spread underscored how quickly misinformation can outpace fact-checking. The psychological impact is profound: even after debunking, doubt lingers. Once people see a leader’s face or hear their voice in a false context, trust in genuine communication erodes.
Why do deepfakes succeed where fake news sometimes fails? Because humans instinctively trust visual evidence. Seeing is believing, even if what we see is false. Three psychological factors amplify the danger:
- Visual trust: People process images and videos as stronger evidence than text.
- Crisis vulnerability: During uncertainty, individuals are more likely to accept alarming content at face value.
- Truth decay: Repeated exposure to deepfakes erodes trust in authentic media, making citizens doubt even genuine communications from authorities.
For the elderly or digitally untrained, the danger is compounded: they may lack the literacy or scepticism to question such material.
India’s Toolbox: Fragmented and Outdated
Despite the growing menace, India does not have a dedicated law on deepfakes or synthetic media. The current legal framework is patchy at best: provisions of the Information Technology Act, 2000 can be stretched to cover some forms of misuse, but the statute is ill-equipped to address AI-generated media specifically. Similarly, the Digital Personal Data Protection Act, 2023 protects personal data but does not define or regulate synthetic or manipulated content.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require platforms to take down unlawful content upon notification, but impose no obligation of proactive monitoring. At the time of writing, however, the Ministry of Electronics and Information Technology has put forward a proposal that would require creators to disclose the use of AI in their content, and would require platforms to identify manipulated media and apply a clear label indicating that a piece of content is synthetic.
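What such a machine-readable label might look like in practice remains an open question; the proposal does not prescribe a format. The snippet below is a purely hypothetical sketch, with invented field names, of the kind of disclosure record a platform could attach alongside an uploaded clip.

```python
import json

# Hypothetical disclosure record; field names are invented for illustration
# and are not drawn from the MeitY proposal or any existing standard.
synthetic_label = {
    "is_synthetic": True,                          # platform's determination
    "creator_disclosed_ai_use": True,              # self-declaration by the uploader
    "generation_tool": "example-video-generator",  # placeholder tool name
    "visible_label_applied": True,                 # whether viewers see a warning overlay
    "labelled_at_utc": "2025-01-01T00:00:00Z",
}

print(json.dumps(synthetic_label, indent=2))
```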
The Platform Paradox: Who Guards the Feed?
Much of India’s misinformation travels through platforms such as WhatsApp, YouTube, Instagram, and X (formerly Twitter). Yet these platforms typically act only after the damage is done. Their current gaps include a lack of watermarking and detection tools that work for Indian languages, no real-time moderation during crises, and delayed takedowns that allow harmful content to circulate widely before removal.
The Intermediary Guidelines, 2021 impose traceability obligations on messaging apps but stop short of requiring proactive detection. In an emergency, this delay could fuel widespread panic. The question remains:
Should platforms have a heightened duty of care in times of war, disasters, or elections?
The Global Standard: What India Can Learn
Other countries are not standing still. The EU AI Act (2024) requires AI-generated content, including deepfakes, to be clearly labelled, and political deepfakes are explicitly flagged as high-risk. China’s Deep Synthesis Regulation (2023) mandates that all AI-generated content be watermarked and tagged, with penalties for both creators and platforms in cases of non-compliance. In the United States, several states have introduced laws banning political deepfakes close to elections, and federal proposals for watermarking synthetic media are under debate. India, however, has no equivalent safeguards. Without proactive regulation, the country risks importing not just the technology, but also its harms.
India faces an urgent need for a comprehensive strategy to combat deepfakes. Piecemeal measures, such as isolated takedowns or weak advisories, are insufficient. Instead, a robust defence requires four interdependent pillars: modernising the legal framework, enforcing platform accountability, fostering digital literacy, and ensuring swift access to justice. The failure of any pillar weakens the entire system.
The First Pillar: Legislative Clarity
India’s current laws are outdated and ambiguous, creating a “legal black hole” that malicious actors can exploit. A modern legal framework must begin by defining “synthetic media” and “deepfakes” in the Information Technology Act, 2000. Such a definition would anchor specific offences criminalising the production and intentional dissemination of harmful deepfakes, covering threats such as public disorder, financial fraud, election interference, and reputational damage. A clear, tiered liability structure should hold creators, sharers, and platforms accountable, giving law enforcement and courts the clarity needed to assign responsibility effectively.
The Second Pillar: Platform Accountability
Social media platforms must act as responsible intermediaries, not passive conduits. Regulation should require AI-generated content to carry visible watermarks and machine-readable signals, while platforms must deploy practical detection tools across all major Indian languages, subject to independent audits. Additionally, “Crisis Protocols” should trigger during national emergencies, civil unrest, or elections, ensuring faster takedowns, heightened moderation, and proactive content management integrated with national emergency systems.
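To make the watermarking requirement concrete, the sketch below stamps a visible “AI-generated” notice onto a single image frame using the Pillow imaging library. It is a rough illustration under assumed file names, not a description of any platform’s actual pipeline; production systems would also embed tamper-resistant, machine-readable provenance data rather than rely on a visual overlay alone.

```python
# Stamp a visible "AI-GENERATED" label onto one frame of a synthetic clip.
# Illustrative only: file names are placeholders, and real deployments would
# pair this with machine-readable provenance metadata (e.g. C2PA-style).
from PIL import Image, ImageDraw

frame = Image.open("synthetic_frame.png").convert("RGB")   # placeholder input
draw = ImageDraw.Draw(frame)
label = "AI-GENERATED CONTENT"

# Draw the label in the bottom-left corner with a dark backing box for legibility.
text_box = draw.textbbox((10, frame.height - 30), label)
draw.rectangle(text_box, fill=(0, 0, 0))
draw.text((10, frame.height - 30), label, fill=(255, 255, 255))

frame.save("synthetic_frame_watermarked.png")              # placeholder output
```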
The Third Pillar: Resilient Citizenry
Even the best laws and technology are ineffective if the public cannot recognise misinformation. India must invest in a sustained, multilingual digital literacy mission that equips citizens with practical verification skills through schools, public campaigns, and community workshops. Support for independent fact-checking organisations, especially those operating in regional languages, is essential for rapid debunking and maintaining a healthy information ecosystem.
The Fourth Pillar: Justice
Traditional judicial processes are too slow to counter the rapid spread of deepfakes. Courts should strengthen the use of dynamic “John Doe” injunctions to target unknown perpetrators and emerging infringing content. Fast-track mechanisms within existing judicial structures should allow expedited relief, enabling victims to secure prompt content removal and interim remedies to mitigate ongoing harm.
These four pillars must function as an integrated system. Laws are ineffective without platform enforcement, platforms are limited without a digitally literate public, and citizens need timely legal recourse when harmed. Only a coordinated, holistic approach can provide India with a robust shield against the growing threat of digital deception.
Conclusion
The battlefield of the future is not only territorial, but also digital. Deepfakes in times of crisis threaten to destabilise trust, sow panic, and manipulate public opinion at a scale never seen before. India’s laws, however, are yet to catch up. Without a dedicated framework for synthetic media, the country risks being unprepared for the next wave of disinformation. The question isn’t whether deepfakes will be used in future crises; it’s how ready India will be when they are. Can our laws evolve as fast as the lies we see on screen?
