AI image generator’s data leak exposed thousands of prompts — and it’s a wake-up call for anyone using AI tools

A shocking AI data leak revealed thousands of explicit user prompts — proving your chats with AI may not be as private as you think.

When most people interact with AI, whether they’re typing a prompt or generating images, they assume a certain level of privacy. It feels like a conversation between you and the AI only. However, a recent report from Wired should make everyone think twice.
Security researcher Jeremiah Fowler discovered an unprotected database belonging to South Korean AI company GenNomis, which contained over 95,000 files, many of them explicit and some likely illegal. The database revealed exactly what people had been generating with the company’s AI tool, and it was disturbing: non-consensual explicit imagery, deepfakes, and what appeared to be child sexual abuse material (CSAM). GenNomis swiftly locked down the database once contacted, but the damage had already been done.
Assumptions about AI safety

This story is alarming for several reasons, but especially for what it reveals about AI safety and user assumptions. Many people use generative AI tools as if they’re personal assistants or private sketchbooks. Some use them to brainstorm business ideas, write personal reflections, or even confess secrets, though the latter is among the top things you should never share with a chatbot.
But what if those prompts are stored? What if they’re accessible, not just to developers or internal teams, but potentially to hackers or researchers? The GenNomis case isn’t an isolated incident. In fact, it highlights a much broader issue: our flawed perception of privacy regarding AI. Many users still believe their conversations and creations with AI are private, when in reality that data is often stored, reviewed, and in some cases left vulnerable.
Major platforms like OpenAI’s ChatGPT and Google Gemini collect user inputs for system training and improvement. However, to be clear, your conversations are not used to train OpenAI models by default. You can check or change this setting under Settings > Data Controls > Chat History & Training.

If the setting is turned off, your conversations are not stored or used for training. If you leave it on, your chats may be reviewed by OpenAI to improve model performance, but even then it’s done in a way that protects privacy (e.g., removing personal identifiers). So, unless you explicitly allow your data to be used by keeping Chat History & Training on, your chats are not used to train ChatGPT.
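That identifier-scrubbing happens on the provider’s side, though. If you want a belt-and-suspenders approach, you can strip obvious identifiers yourself before a prompt ever leaves your machine. Here’s a minimal sketch assuming OpenAI’s official Python SDK; the redaction patterns and model name are illustrative, not a complete PII filter.

```python
import re
from openai import OpenAI  # assumes the official openai Python SDK is installed

# Illustrative-only patterns: a real redaction pass would need far broader coverage.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"), "[PHONE]"),  # rough phone-number match
]

def redact(text: str) -> str:
    """Strip obvious personal identifiers before the prompt leaves your machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Draft a reply to jane.doe@example.com about the 555-0142 support call."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is illustrative; any chat model works here
    messages=[{"role": "user", "content": redact(prompt)}],
)
print(response.choices[0].message.content)
```

A real pipeline would need to cover names, addresses, account numbers, and more; the point is simply that anything you redact locally can never end up in a leaked database.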
Why opting out isn't foolproof

As long as your input is transmitted and stored through cloud infrastructure, there’s always a risk of exposure, whether through human error, a system breach, or intentional misuse.
There are also real consequences to this. As seen in the GenNomis case, when AI data is not secured correctly, it doesn’t just represent a potential privacy violation; it can become a repository of harm. From revenge porn and deepfakes to violent or illegal content, what users feed into these models can have ripple effects far beyond the screen.
Here are a few important things to keep in mind:

- AI prompts are not private by default. Unless you’re using a fully local or encrypted tool, assume what you write could be stored (see the sketch after this list for what a local setup looks like).
- Sensitive content should stay offline. Avoid sharing anything personal, confidential, or legally sensitive with AI tools.
- Your AI interactions can be part of future training data. Even if anonymized, your ideas or phrases might resurface in unexpected ways.
- Transparency varies. Not all AI companies disclose how long they keep your data or what they do with it.
- A breach doesn’t have to happen for harm to occur. Internal misuse or poor moderation standards can be just as dangerous.
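To make the "fully local" option in the first point concrete, here’s a minimal sketch that sends a prompt to a model running on your own machine via Ollama’s local HTTP API. It assumes you’ve installed Ollama and pulled a model yourself, and the model name is illustrative.

```python
import requests  # assumes the Ollama server is running locally (https://ollama.com)

# Everything below stays on your machine: the prompt goes to localhost,
# not to a third-party cloud service.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # any model you've pulled locally; name is illustrative
        "prompt": "Summarize my private notes without sending them anywhere.",
        "stream": False,      # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(response.json()["response"])
```

Because the request goes to localhost rather than a remote server, the prompt never leaves your machine, so there’s no cloud database to breach in the first place.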
Questions to ask yourself when using AI

This doesn’t mean you should stop using AI altogether. It means you should treat it like the powerful (and fallible) tool it is.
Ask yourself:

- Would I want this information to become public?
- Is this something I’d be comfortable putting in an email or on social media?
- Could this data be misused if it fell into the wrong hands?

AI is incredibly helpful, right up until it’s horrifying. The GenNomis breach serves as a chilling reminder that behind every prompt is a record, and behind every AI engine is a company (or multiple companies) managing your data. Before you type, consider where your words might end up.
Ultimately, the safest approach is simple: if it’s something you’d never want exposed, don’t share it with AI.