
Public by Default: when your private prompts become a showcase

Updated
4 min read

In the cybersecurity world, paranoia is part of daily life.

We usually talk about flaws in terms of exploits, payloads, or bypasses. But not every failure is a bug. Sometimes the failure is the product design itself, which can be even more alarming.

While idly and curiously browsing around, I stumbled on an AI platform that offers several free but limited AI engines. The strange part is an intentional discovery endpoint: a feed that exposes other users' queries and prompts like social media posts.

The catch

By default, every conversation is public. There is an option to mark each chat as private, but since that is not the default, most users never change it. That small design choice means the majority of prompts stay exposed.

The real cases below make it clear: what looks like a personal or harmless query can end up leaking photos, credentials, or sensitive situations. Defaults matter, and here the default setting is exactly what puts privacy at risk.

Real cases

Case 1: Personal relationship coaching with private photos

A user uploaded a .png image, asking the AI to help him “bag this girl 100%”. Both the photo and the request are exposed to the world.

Case 2: Academic dishonesty with multiple photos

Another prompt contained eight images of exam questions with a request to code the solution in C++. Not only an integrity issue, but also private exam content, now public.

Case 3: Full curl request with Bearer token and cookies

One of the most critical cases: a complete curl request including a Bearer token and cookies. Anyone could reuse them for unauthorized access. A classic example of sensitive credentials leaking by design.

Case 4: Facial rating and body shaming risk

A link to a ChatGPT share where a user asks for a full brutal breakdown of their looks: jawline, eyes, hair. A privacy nightmare turned into potential public shaming.

Case 5: Self-image

Another photo, uploaded by someone asking if their face was “really bad”. It highlights how exposed and fragile people can be when they don't realize their content is public.

Case 6: Blog XML import request exposing admin emails

A user uploaded a Blogspot XML backup and asked for help importing posts and comments into WordPress. The file contained full post content, comments, and dates, and also exposed administrator emails.

Case 7: Database configs and credentials in plain sight

Another case revealed PostgreSQL configuration (pg_hba.conf), .env variables including DB_USER and DB_PASS, and a Flask app snippet showing how the database is accessed.
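The safe pattern that case violated is simple: credentials belong in the environment, never hardcoded or pasted into a snippet you share. A minimal sketch of that pattern (the variable names mirror the leaked .env, but the function and DSN details are illustrative assumptions, not taken from the exposed app):

```python
import os

def build_dsn() -> str:
    """Build a Postgres connection string from environment variables
    instead of hardcoding DB_USER/DB_PASS in code or a prompt."""
    user = os.environ.get("DB_USER", "app")       # non-secret default is fine
    password = os.environ.get("DB_PASS")           # secret: no default, never inline
    if password is None:
        raise RuntimeError("DB_PASS not set; export it, don't paste it into a prompt")
    # Host, port, and database name here are placeholders.
    return f"postgresql://{user}:{password}@localhost:5432/appdb"
```

With this shape, the code itself is safe to share for debugging help, because nothing secret lives in it.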

Case 8: Feedback document exposing worker identity and internal training instructions

A user uploaded a .pdf file with instructions for using the AI as a personal assistant to maximize earnings. Inside the document, sensitive details were visible, including personal identifiers, worker IDs, country of origin, and internal training/probation guidelines. It even contained instructions on how to communicate with the company’s feedback department.

Impact: The document combines personally identifiable information with internal company processes. This data could be used for impersonation, phishing, or targeted attacks against both the individual and the organization.

The problem isn't technical

This isn't SQLi or XSS. It's worse: a leak by design. When the default is “public,” these risks explode:

  1. Doxing: personal details exposed unintentionally.

  2. Scraping: easy to build massive datasets from prompts and images.

  3. Legal trouble: copyrighted, sensitive, or client data leaking.

  4. Emotional damage: users publishing personal photos or thoughts unknowingly.

Times are changing

Let's be real: today people use ChatGPT or other AI tools every single day, every single minute. It's part of work, school, dating, even daily routine. And with that constant flow of prompts, the line between private and public is thinner than ever.

This post isn't about exposing one site. It's about showing that with just a bit of casual browsing on the so-called “legal internet,” you can stumble on free alternatives that expose all your data. It's the classic case of cheap turning out expensive. Or worse, the careless use of a tool by someone who doesn't know their content is going straight into a showcase.

At the end of the day, it feels like private data doesn't matter as much anymore when the priority is to get fast answers or save a few bucks.

Conclusion

You don't need an exploit to break privacy. Sometimes bad design is enough, and in cases like this it even makes you wonder if the whole thing is just a scam hidden behind a shiny AI interface. Either way it's a reminder that defaults matter.

If you use these tools, treat every prompt as public unless you make it private yourself. Always check the small print, look for the settings, and don’t assume privacy is automatic. If you share sensitive information, wrap it in <REDACTED> tags or remove details that could expose you.
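That scrubbing habit can even be automated before a prompt leaves your machine. A minimal sketch (the patterns are illustrative examples for the kinds of data leaked in the cases above, not an exhaustive filter):

```python
import re

# Illustrative patterns for secrets seen in the cases above; extend as needed.
SENSITIVE_PATTERNS = [
    re.compile(r"Bearer\s+[A-Za-z0-9._~+/-]+=*"),            # Authorization headers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),                 # email addresses
    re.compile(r"(?i)(DB_PASS|DB_USER|API_KEY)\s*=\s*\S+"),  # .env-style secrets
]

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with <REDACTED>."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("<REDACTED>", prompt)
    return prompt

print(redact("curl -H 'Authorization: Bearer eyJhbGciOi' https://api.example.com"))
# The token is stripped before the prompt is ever sent.
```

It won't catch everything, which is exactly the point: the only safe assumption is that whatever survives the filter becomes public.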

If you're building these tools, don’t leave privacy as an optional checkbox. Make private truly private, and make it the default.

Because privacy doesn’t only break with payloads, it breaks with careless product choices and with platforms that might care more about quick growth than protecting people.
