OpenAI faces growing pressure from European regulators to give ChatGPT users clearer control over their stored conversations and personal data. The European Data Protection Board published a report summarizing coordinated work by EU data-protection authorities, raising questions about the lawful basis for processing personal information and the transparency of retention practices. For people who use ChatGPT regularly, the practical question is straightforward: how do you actually delete what the chatbot knows about you, and what are the limits of those controls?
What ChatGPT Stores and Why It Matters
Every prompt typed into ChatGPT can be retained by OpenAI for multiple purposes, including service improvement and model training. By default, conversations are saved to a user’s history and may be reviewed by OpenAI staff or used to refine future versions of the AI. That means personal details shared in a chat, whether a home address pasted for a cover letter or a medical question, can persist on OpenAI’s servers well after the browser tab is closed. Any stored text that contains identifying information can become an exposure point in the event of a data breach or unauthorized access.
The distinction between “chat history” and “training data” is one that many users miss. Clearing visible conversation logs from the ChatGPT interface removes them from a user’s account view, but it does not necessarily remove those inputs from the datasets OpenAI has already compiled for model training. This gap between what users see and what the company retains is exactly the kind of transparency concern that EU regulators have flagged. The ChatGPT taskforce report specifically addresses lawful basis, transparency obligations, and user rights such as access, erasure, and objection, all of which shape how OpenAI must handle stored data within the European Union.
Step-by-Step: Clearing Your Chat History
Removing saved conversations from ChatGPT takes only a few clicks, though the process differs slightly between the web interface and the mobile app. On the web, users should log into their OpenAI account, click the profile icon in the lower-left corner, and open Settings. From there, selecting “General” reveals the option to clear all chats. On the iOS or Android app, the path runs through the three-line menu icon, then Settings, then “Clear conversations.” Individual chats can also be deleted one at a time by hovering over a conversation in the sidebar and clicking the trash icon.
Deleting chats removes them from the user-facing interface, but users who want to go further should also adjust the Data Controls settings. Inside the same Settings menu, toggling off “Chat history and training” prevents new conversations from being saved to history and, according to OpenAI’s documentation, stops those inputs from being used to train future models. Conversations started while this toggle is off may be retained for up to 30 days for abuse monitoring and then deleted. This is one of the most effective steps a privacy-conscious user can take without abandoning the platform entirely.
Requesting Full Data Deletion Under GDPR
For users in the European Union, clearing chat history through the interface is only part of the picture. EU residents have the legal right to request that OpenAI erase personal data it holds about them, a right grounded in the General Data Protection Regulation’s provisions on erasure and objection. The EDPB’s coordinated work addresses how these rights apply in the ChatGPT context, and the taskforce report examines how OpenAI’s practices align with requirements around lawful basis and data-subject access. Users can submit a formal deletion request through OpenAI’s privacy channels, including the in-account tools for exporting data and deleting an account.
The practical effect of a GDPR erasure request can go beyond what the in-app deletion toggle achieves. A formal request generally requires a company to address personal data in its systems, not just remove chats from the interface. That said, fully purging specific data from trained models can be technically difficult, since individual inputs are blended into statistical patterns during training. This tension between legal obligations and technical feasibility is one reason the EDPB taskforce examined the topic closely. Users outside the EU may have fewer enforceable rights, though OpenAI describes some deletion options more broadly in its privacy materials. The gap between jurisdictions means that feature availability and response times can vary depending on where a user is located.
Limits of User Controls and Ongoing Gaps
Even with every available toggle switched off, users cannot fully control what ChatGPT retains. Conversations held for abuse monitoring are not truly ephemeral, even if they are deleted after the retention window. And once data has been incorporated into a trained model, extracting or deleting a specific user’s contribution is generally not possible with standard machine-learning techniques. OpenAI has introduced features like temporary chats, which are designed to leave no persistent record, but independent verification of how those chats are actually handled remains limited.
A common blind spot in the current conversation around AI privacy is the assumption that clearing history is equivalent to erasing influence. A user who shares sensitive financial details in a prompt and then deletes the chat has removed the visible record but may have already contributed to the statistical fabric of the next model update, assuming chat history and training were enabled at the time. The most effective privacy strategy is preventive: avoid sharing identifying or sensitive information in any AI chat, regardless of the platform’s data controls. No deletion tool can undo what a model has already learned from aggregated training data.
Regulatory Pressure and What Changes Next
The EDPB’s taskforce report is not a one-off inquiry. It represents a coordinated effort across multiple national data-protection authorities to hold AI providers to the same standards that apply to any company processing personal data in the EU. The report’s focus on lawful basis and transparency signals that regulators expect OpenAI to demonstrate, not just assert, that its data practices comply with GDPR. If enforcement actions follow, they could force changes to how ChatGPT handles data retention globally, since companies often standardize privacy features rather than maintain separate systems for each jurisdiction.
Outside Europe, regulatory frameworks can be less comprehensive. The United States has no single federal equivalent to GDPR, and approaches to AI-related data practices vary. This regulatory asymmetry can leave many ChatGPT users with weaker formal rights and fewer avenues to challenge how their data is used. In practice, that makes the tools and commitments OpenAI has deployed—such as history controls, data export options, and the ability to object to certain processing—especially important for people who cannot rely on strong national privacy laws to back them up.
*This article was researched with the help of AI, with human editors creating the final content.