Here's some advice that should be obvious but apparently isn't: maybe don't tell random AI chatbots your Social Security number. Harsh Varshney, who's worked on privacy and security for Google's Chrome AI team, is out with a reminder that treating AI chatbots like your trusted confidant might not be the smartest move.
Your AI Chat History Isn't Actually Private
Varshney's core message is refreshingly simple. "AI models use data to generate helpful responses," he told Business Insider. Translation: that helpful bot remembering your preferences? It's storing what you tell it, and "we users need to protect our private information so that harmful entities, like cybercriminals and data brokers, can't access it."
Think of public AI tools like postcards rather than sealed envelopes. You wouldn't write your credit card number on a postcard, so why tell it to a free chatbot? Varshney specifically warns against sharing Social Security numbers, credit card information, home addresses, or medical records with public AI platforms that might repurpose your data for future model training.
Even Enterprise Tools Have Long Memories
For work-related conversations, the engineer recommends enterprise-grade AI tools over consumer versions. But even those come with surprises. "Once, I was surprised that an enterprise Gemini chatbot was able to tell me my exact address," Varshney said, highlighting how long-term memory features can retain information you shared weeks or months ago.
The solution? Delete your chat history regularly and lean on temporary or incognito modes when possible. It's basic digital hygiene, but apparently worth repeating as AI becomes woven into daily workflows.
Varshney also suggests sticking with well-known AI platforms and actually bothering to review those privacy settings everyone ignores. Many services let you opt out of having your conversations used for training data, which seems like a worthwhile box to check.
Not All AI Platforms Handle Your Data Equally
If you're wondering which chatbots are actually respecting your privacy, earlier this year a report from Incogni ranked the major players. Mistral AI's Le Chat came out on top for data protection, followed by ChatGPT and Grok, thanks to clearer privacy policies and opt-out options.
On the other end of the spectrum? Meta AI, Google's Gemini, and Microsoft's Copilot were flagged as the most aggressive data collectors, often with less transparency about what they're actually doing with your information.
The mobile app analysis told a similar story. Le Chat, Pi AI, and ChatGPT posed the lowest privacy risks, while Meta AI was particularly hungry for sensitive data like email addresses and location information.
The bottom line from security experts? Review your privacy settings, assume anything you share could theoretically be seen by others, and save the truly sensitive stuff for old-fashioned phone calls or encrypted messaging apps. Your future self will thank you.