29 October

Your chatbot is not your friend: Welcome to the new data protection nightmare

Hello everyone,
I need to get this off my chest. We talk a lot about the amazing new possibilities offered by AI chatbots. How they help us to be more creative, work faster, maybe even save the world. But we also urgently need to talk about the downside. And I don’t mean fear of the Terminator, but something much more tangible: the complete loss of our digital privacy.


I’ve spent the last few weeks looking deeply into the current situation, reading documents and analysing media reports. And what I’ve found is, frankly, disturbing. The assumption that our conversations with ChatGPT, Claude and co. are confidential is a dangerous illusion. Looking back, I have been careless myself and fed these tools private material that has no business being there. Let’s take a closer look.
The state is reading along: not a joke, but a legal precedent


Perhaps the most shocking case is only a few days old. The US Department of Homeland Security (DHS) has forced OpenAI to reveal the identity of a ChatGPT user. [1] Yes, you read that correctly. A federal court ordered it. Investigators were tracking a suspect on a darknet forum. In a chat, he had casually mentioned that he uses ChatGPT and even shared a few of his (harmless) prompts. That was enough for the authorities to demand the user’s complete data from OpenAI: name, address, payment details and, above all, all of his other chat histories. [2]


This is a watershed moment. Until now, we knew such requests only from Google search histories. Now it’s clear to everyone: our most private conversations with AI have become fair game for government investigations. Your prompts become a ‘digital fingerprint’. A criminal lawyer aptly called it ‘profiling at the level of DNA traces’. [2] I don’t even want to imagine what authorities (but also private individuals) could do with such personality profiles.


When private chats suddenly become public: The Grok fail
But it’s not just the state. Sometimes a single click is enough to share your most intimate thoughts with the whole world. That’s exactly what happened to users of Elon Musk’s chatbot ‘Grok’. The ‘Share’ button, which was meant to share a conversation with a single recipient, instead generated a publicly accessible URL. The result: hundreds of thousands of private chats suddenly became discoverable by anyone via Google search. [3] Among them were requests for secure passwords, medical diagnoses and detailed nutrition plans. One expert subsequently called AI chatbots a ‘privacy disaster in progress’. [3] I couldn’t agree more.


The small print: why your chatbot is exploiting you
The problem lies deep within the system. Most providers, led by Meta, Google and, more recently, Anthropic (Claude), use your conversations by default to train their models. [5] You have to actively dig through the settings to opt out. With Claude, the policy recently flipped: conversations used to stay out of training by default, but now you have to actively decline if you don’t want your data used. If you miss that step, your data will be stored for up to five years. [6]


Providers have a huge interest in collecting as much data as possible. They use it to improve their products and gain a competitive advantage. Our privacy falls by the wayside. Only the expensive business or enterprise versions offer contractually guaranteed protection mechanisms. For the average user, the rule is: you are the product.

What can you do now?
I don’t want to spread panic here; my goal is, first and foremost, to optimise my own use of AI and to raise awareness of the reality. We must learn to use these tools responsibly. Here are a few concrete steps you can take right away:
1) Assume everything is public: Treat every chatbot as if you were speaking in a public place. Never enter sensitive personal information. No names, no addresses, no financial data, no health information.
2) Opt out of data use: Immediately go to the settings of every AI tool you use and opt out of your data being used for training purposes. Make this a routine for every new service.
3) Delete your history: Don’t rely on automatic deletion. Manually delete your chat history regularly.
4) Choose the right provider: For sensitive queries, switch to privacy-friendly alternatives such as Mistral AI (Le Chat), or even consider a local LLM running entirely on your own computer (see the sketch right after this list).
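
For point 4, here is what a local setup can look like in practice. This is a minimal sketch, assuming you have installed Ollama (https://ollama.com) and pulled a model beforehand; the model name ‘llama3’ and the timeout are purely illustrative. The crucial point is that the prompt and the answer never leave your machine.

```python
# Minimal sketch: query a locally running LLM via Ollama's HTTP API.
# Assumes Ollama is installed and a model has been pulled, e.g. `ollama pull llama3`.
# Everything stays on your own computer; no data is sent to a third-party server.
import requests

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    # Ollama listens on localhost:11434 by default.
    # "stream": False returns the full answer as one JSON object instead of a token stream.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarise the key points of the GDPR in three sentences."))
```

The same pattern works with any local runner that exposes an HTTP endpoint; the design choice that matters is simply that nothing is handed to a third-party server in the first place.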


Even stricter rules apply to organisations and companies. Here, enterprise versions with clear contractual guarantees are mandatory. So are clear usage guidelines and employee training. Anything else is grossly negligent.


The days of naive enthusiasm for technology are over. We must remain critical and reclaim control over our data. Because one thing is clear: your chatbot is not your friend, nor is it your therapist. It is a tool operated by a company that primarily pursues its own interests.
Stay vigilant,
Andi


Sources
[1] Forbes (2025, 20 October). OpenAI Ordered To Unmask ChatGPT User Behind 2 Prompts. Retrieved from https://www.forbes.com/sites/thomasbrewster/2025/10/20/openai-ordered-to-unmask-writer-of-prompts/
[2] Heise Online (2025, 27 October). Precedent: US authority identifies darknet admin with ChatGPT data. Retrieved from https://www.heise.de/news/Praezedenzfall-US-Behoerde-identifiziert-Darknet-Admin-mit-ChatGPT-Daten-10900127.html
[3] BBC (2025, 21 August). Hundreds of thousands of Grok chats exposed in Google results. Retrieved from https://www.bbc.com/news/articles/cdrkmk00jy0o
[4] Malwarebytes (2025, 10 October). Millions of (very) private chats exposed by two AI companion apps. Retrieved from https://www.malwarebytes.com/blog/news/2025/10/millions-of-very-private-chats-exposed-by-two-ai-companion-apps
[5] Privacy Analysis of AI Interactions: Recent Developments, Provider Comparison, and Risk Mitigation Strategies. (2025). Internal document.
[6] Expert Report: Privacy and Security Analysis of AI Interactions—Mitigation Strategies for Government Access Risks. (2025). Internal document.
