Open ChatGPT, click your profile icon, and select Settings > Data Controls. What you find there is the closest thing you have to a privacy dashboard for one of the most widely used AI tools on the planet: a handful of toggles governing whether OpenAI can train on your conversations, store a running memory of your preferences, and retain your chat history. Most of its hundreds of millions of weekly users have never visited that screen. If you are one of them, now is a good time to look.
Regulators in the United States and Europe have spent the past several years pressing AI companies on a deceptively simple question: when a user deletes a conversation, does the data actually disappear? The answers so far suggest that erasing a chat log does not necessarily erase its influence on the model behind it. For anyone who has brainstormed a business plan, discussed a medical concern, or vented about a colleague inside a chatbot window, that gap between deleting a conversation and removing its traces from a trained algorithm is the privacy problem worth understanding right now.
The regulatory trail so far
Two enforcement patterns anchor the current understanding of AI data deletion. The first comes from the U.S. Federal Trade Commission, which has repeatedly ordered companies to destroy not just improperly collected data but also the algorithms trained on it, a remedy the agency calls “algorithmic disgorgement.”
The landmark case arrived in January 2021, when the FTC settled with Everalbum, the developer of a photo storage app that had used customers’ uploaded images to train facial recognition systems without adequate consent. The settlement required the company to delete not only the raw photos but also any models or algorithms derived from them. That distinction was pivotal: the agency treated a trained algorithm as an extension of the personal information it absorbed, not as a separate corporate asset that could survive a data purge.
The FTC has since applied the same logic elsewhere. In 2022, it ordered WW International (formerly Weight Watchers) to destroy algorithms its Kurbo app had built using children’s data collected without parental consent. In late 2023, the agency’s settlement with Rite Aid over a flawed facial recognition surveillance program included similar algorithmic deletion requirements. Taken together, these cases establish a clear pattern: if a company trains a model on data it should never have used, U.S. regulators expect the resulting model itself to be dismantled.
On the European side, the European Data Protection Board assembled a dedicated ChatGPT Taskforce in 2023 and published the taskforce’s report in May 2024, examining how the chatbot’s data processing aligns with the General Data Protection Regulation. The report flags concerns about the legal basis OpenAI uses to justify training on personal data, the adequacy of user-facing controls such as opt-outs and memory settings, and the difficulty of honoring data subject access requests when information is embedded in model weights rather than stored in a searchable database.
Meanwhile, the EU AI Act, which entered into force in August 2024 with compliance deadlines phased through 2026, adds another layer. It imposes transparency and data governance obligations on providers of general-purpose AI models, including requirements around training data documentation. National data protection authorities have also opened their own investigations into OpenAI: Italy’s Garante concluded its inquiry in late 2024 with a fine of roughly 15 million euros over GDPR violations, while proceedings in Poland and Spain remained open as of early 2026.
What remains uncertain
Neither the FTC’s disgorgement orders nor the EDPB report answers the question users care about most: once a conversation has been folded into ChatGPT’s training data, can its influence on the model be fully reversed?
The Everalbum and Rite Aid cases required outright deletion of algorithms, but those remedies applied to relatively contained systems: facial recognition models trained on specific image sets. Large language models are orders of magnitude more complex. No public audit has demonstrated whether OpenAI or any comparable company can isolate and remove the statistical weight of a single user’s contributions from a model trained on billions of data points. Academic researchers are exploring techniques collectively known as “machine unlearning,” which aim to selectively erase the influence of specific training examples without retraining an entire model from scratch. As of spring 2026, these methods remain experimental and have not been validated at the scale of a production LLM.
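To make the idea concrete, one family of approaches studied in the unlearning literature fine-tunes an already trained model to raise its loss on a designated “forget set” while preserving performance on the data it is allowed to keep. The sketch below is a toy illustration of that gradient-based idea on a tiny synthetic classifier; it is not how OpenAI or any production system handles deletion, and the model, data, and hyperparameters are invented purely for demonstration.

```python
# Toy sketch of gradient-based "machine unlearning" on a small classifier.
# Illustrative only: production LLMs are vastly larger, and no such method
# has been validated at that scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in data: 1,000 "retain" examples and 20 "forget" examples.
retain_x, retain_y = torch.randn(1000, 16), torch.randint(0, 2, (1000,))
forget_x, forget_y = torch.randn(20, 16), torch.randint(0, 2, (20,))

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# 1) Ordinary training on everything (retain + forget).
all_x = torch.cat([retain_x, forget_x])
all_y = torch.cat([retain_y, forget_y])
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(all_x), all_y).backward()
    opt.step()

# 2) "Unlearning" pass: increase the loss on the forget set (push its
#    influence out) while keeping the loss on the retain set low.
unlearn_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(50):
    unlearn_opt.zero_grad()
    forget_loss = loss_fn(model(forget_x), forget_y)
    retain_loss = loss_fn(model(retain_x), retain_y)
    (retain_loss - forget_loss).backward()  # minimize retain, maximize forget
    unlearn_opt.step()

print("retain loss:", loss_fn(model(retain_x), retain_y).item())
print("forget loss:", loss_fn(model(forget_x), forget_y).item())
```

Even in this toy setting, the hard part is verification: a rising forget-set loss does not prove those examples’ influence is gone, which is one reason auditors treat unlearning claims cautiously.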
OpenAI’s own privacy policy states that conversation data may be retained for up to 30 days after a user deletes it, primarily for safety and abuse monitoring, before being permanently removed from active systems. The company also offers a formal data subject access request process and a data export tool. But “removed from active systems” is not the same as “removed from a trained model,” and OpenAI has not publicly detailed how, or whether, it purges training influence after the fact.
No official U.S. regulatory guidance exists that specifically addresses conversational AI systems like ChatGPT. The FTC’s algorithmic disgorgement precedent remains an indirect analogy drawn from different product categories. Whether the same remedy could be applied to a general-purpose language model has not been tested in court or through a formal enforcement action targeting OpenAI.
How to read the evidence
The primary documents available are institutional sources produced by the regulators themselves: FTC press releases on one side and the EDPB taskforce report on the other. They carry high credibility for the specific claims they make: the FTC confirms settlement terms and the deletion-of-algorithms remedy, while the EDPB confirms that a coordinated European review of ChatGPT’s data practices took place and identified concrete risk areas.
What these sources do not provide is empirical evidence about ChatGPT’s internal data handling. No independent technical audit, peer-reviewed study, or whistleblower disclosure has surfaced to confirm or deny whether deleting a chat and disabling memory in ChatGPT’s settings actually removes a user’s footprint from the model. Journalistic experiments and user-reported anecdotes circulate online, but they lack the methodological rigor to serve as proof.
The FTC’s disgorgement cases are the strongest available signal that U.S. regulators believe deletion obligations extend to trained models. But applying that logic to a system like ChatGPT would require a new enforcement action, and no such case has been filed. The EDPB report is best understood as a diagnostic document, not a verdict. It identifies risk areas and frames questions for national regulators to pursue rather than imposing binding requirements on OpenAI directly.
Step-by-step: tighten your ChatGPT privacy settings
For anyone who wants to act now rather than wait for regulators to catch up, here is a concrete walkthrough. These steps apply to ChatGPT on the web; the mobile app follows a similar path through its settings menu.
- Open Data Controls. Click your profile icon in the bottom-left corner of ChatGPT, select Settings, then choose Data Controls.
- Disable training on your chats. Toggle off “Improve the model for everyone.” This prevents future conversations from being used to train OpenAI’s models. It does not retroactively remove data from past training runs.
- Turn off Memory. If the Memory feature is enabled, ChatGPT stores facts about you across sessions (your name, job, preferences). Go to Settings > Personalization > Memory and switch it off. You can also click “Manage Memory” to review and delete individual stored facts before disabling the feature entirely.
- Clear your chat history. In Settings > Data Controls, select “Delete all chats.” This removes conversations from your account interface. Per OpenAI’s privacy policy, deleted data may be retained in backend systems for up to 30 days before permanent removal.
- Review Custom Instructions. If you have saved custom instructions or a system prompt that contains personal details (your location, profession, writing style preferences), edit or remove anything you would not want associated with your account.
- Submit a data deletion request. For a more thorough approach, use OpenAI’s privacy request portal to submit a formal data subject access or deletion request. European users can invoke their GDPR rights through this channel; users elsewhere can reference applicable local privacy laws.
- Export your data first (optional). Before deleting anything, you can request a data export through Settings > Data Controls > Export Data. OpenAI will email you a downloadable file containing your conversation history and account details.
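If you do request an export, it is worth scanning the files locally for sensitive details before deciding what to delete. The sketch below assumes the export unzips into a folder of JSON files (file names such as conversations.json have been reported, but the format is not guaranteed and may change); it walks whatever JSON it finds and flags strings that look like email addresses, phone numbers, or national ID numbers. It is a rough filter, not a complete PII detector.

```python
# Rough scan of an unzipped ChatGPT data export for strings that look sensitive.
# Assumes the export has been extracted into export_dir and contains JSON files;
# file names and structure are not guaranteed and may change.
import json
import re
from pathlib import Path

export_dir = Path("chatgpt-export")  # wherever you unzipped the export

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\s.-]?){9,14}\d\b"),
    "ssn-like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def walk(value, path, hits):
    """Recursively visit every string in a JSON structure and record matches."""
    if isinstance(value, dict):
        for key, child in value.items():
            walk(child, f"{path}.{key}", hits)
    elif isinstance(value, list):
        for i, child in enumerate(value):
            walk(child, f"{path}[{i}]", hits)
    elif isinstance(value, str):
        for label, pattern in PATTERNS.items():
            if pattern.search(value):
                hits.append((label, path, value[:80]))

hits = []
for json_file in export_dir.glob("*.json"):
    with open(json_file, encoding="utf-8") as f:
        walk(json.load(f), json_file.name, hits)

for label, path, snippet in hits:
    print(f"[{label}] {path}: {snippet!r}")
print(f"{len(hits)} potential matches found")
```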
These actions limit future exposure. They cannot undo past training, but they meaningfully reduce the amount of personal context OpenAI collects going forward.
Smarter habits for every session
Settings alone are not enough. What you type matters as much as which toggles you flip.
Treat a chatbot like any other cloud service: avoid entering full legal names of third parties, financial account numbers, medical diagnoses tied to identifiable details, or confidential work product you are not authorized to disclose. Where possible, redact or generalize specifics. Describe a “health concern” instead of naming a rare condition. Reference a “client in the retail sector” instead of a particular company. Use placeholder names. These habits reduce the risk that sensitive information becomes entangled with a model’s training data in the first place.
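One way to make that habit systematic is to run a draft through a simple redaction pass before pasting it into a chat window. The snippet below is a minimal sketch using only Python’s standard library; the patterns and placeholder names are illustrative, and a regex pass will never catch every identifier.

```python
# Minimal pre-paste redaction pass: swap obvious identifiers for placeholders
# before text ever reaches a chatbot. Patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?\d[\s.-]?){9,14}\d\b"), "[PHONE]"),
    (re.compile(r"\b\d{12,19}\b"), "[CARD-OR-ACCOUNT]"),
    # Known names you want scrubbed; fill in your own list.
    (re.compile(r"\b(Acme Corp|Jane Doe)\b", re.IGNORECASE), "[NAME]"),
]

def redact(text: str) -> str:
    """Return text with matched patterns replaced by generic placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Email jane.doe@acme.com or call 415-555-0100 about the Acme Corp audit."
print(redact(draft))
# -> "Email [EMAIL] or call [PHONE] about the [NAME] audit."
```

Maintaining the name list by hand, rather than trusting automatic detection, keeps the behavior predictable and doubles as a record of what you have decided never to share.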
Organizations deploying ChatGPT or similar tools in professional settings face a different calculus. They may need to conduct internal risk assessments, restrict which teams can use external AI systems, or negotiate enterprise agreements with OpenAI that include contractual terms addressing data retention and training exclusions. In regulated sectors such as health care, finance, or education, relying on general consumer privacy settings is unlikely to satisfy legal obligations. OpenAI’s enterprise and team tiers already offer stronger data isolation guarantees, including a commitment not to train on business data, but those terms should be verified against your organization’s specific compliance requirements.
Why this will keep evolving
The broader lesson from both the FTC and the EDPB is that data deletion in the age of machine learning is not a one-click act. It is a process that must reach from front-end interfaces, where users see and manage their histories, all the way down into training datasets, model weights, and derivative products. Regulators are beginning to articulate that expectation. They have not yet demonstrated, in public and verifiable ways, how it will be enforced against large, general-purpose AI systems.
The technical research community is working on the problem. Machine unlearning papers have multiplied since 2023, and several major AI labs have acknowledged the need for selective data removal capabilities. But peer-reviewed, production-scale solutions do not exist yet, and until they do, every privacy promise from an AI company carries an asterisk.
For now, the evidence supports a cautious but practical stance. Users have meaningful tools to curb how their future conversations are used, and regulators have signaled that trained models are not beyond the reach of privacy law. What remains missing is the technical and legal machinery to connect those points at scale: to verify that when someone asks for their data to be deleted, the request echoes not just through visible chat logs but through the model layers that learned from them. Until that machinery exists and is tested, the smartest move is to control what you can, starting with the settings screen most people have never opened.
*This article was researched with the help of AI, with human editors creating the final content.