
Use ChatGPT via Apple Intelligence to limit data sharing

Every time you type a prompt into ChatGPT, that text lands on OpenAI’s servers under OpenAI’s terms. But if you own an iPhone or iPad running iOS or iPadOS 18.2 or later, there is another route: sending that same prompt through Apple Intelligence, which places Apple’s own privacy infrastructure between you and OpenAI. The difference is not just theoretical. It changes what data leaves your device, who handles it, and under what conditions.

Here is what Apple has disclosed, where the gaps remain as of May 2026, and how to set things up so you share as little as possible.

How the integration actually works

Apple Intelligence splits AI tasks into two tiers. Simple requests, such as summarizing a paragraph or rewriting an email, run entirely on your device using Apple’s own compact language models. Nothing leaves your phone. More complex requests that exceed on-device capability get routed to what Apple calls Private Cloud Compute (PCC), a server-side system Apple built specifically to process queries without retaining user data afterward.
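
To make the two-tier split concrete, here is a minimal Swift sketch of the routing decision. The type names and the complexity threshold are illustrative assumptions, not Apple’s published API; Apple has not released the code that makes this call.

```swift
// Illustrative sketch only: these names and the threshold are assumptions,
// not Apple's actual Apple Intelligence implementation.
enum ExecutionTier {
    case onDevice             // Apple's compact local models; nothing leaves the phone
    case privateCloudCompute  // Apple's PCC servers; no data retained afterward
}

struct AIRequest {
    let prompt: String
    let estimatedComplexity: Int  // hypothetical 0-100 difficulty score
}

/// Simple requests stay on the device; anything beyond local capability
/// is routed to Private Cloud Compute.
func route(_ request: AIRequest, onDeviceLimit: Int = 40) -> ExecutionTier {
    request.estimatedComplexity <= onDeviceLimit ? .onDevice : .privateCloudCompute
}
```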

ChatGPT enters the picture only when a task requires capabilities beyond what Apple’s own models can handle. When that happens, PCC acts as a buffer: it forwards a stripped-down version of your request to OpenAI, without attaching your Apple ID, device identifiers, or conversation history. Apple has stated publicly, including during its WWDC 2024 keynote, that requests routed to ChatGPT through this system are not stored by OpenAI and are not used to train OpenAI’s models.
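
The sketch below illustrates what “stripped-down” means in practice, under the assumption that only the prompt the user approved travels onward. The field names are hypothetical; neither Apple nor OpenAI has published the actual handoff schema.

```swift
// Hypothetical field names; the real Apple-to-OpenAI handoff schema is not public.
struct LocalContext {
    let appleID: String                // stays with Apple
    let deviceIdentifiers: [String]    // stays with Apple
    let conversationHistory: [String]  // stays on the device
    let prompt: String                 // the only part the user approved for ChatGPT
}

struct ChatGPTHandoff {
    let prompt: String
}

/// Forward only the approved prompt; identifiers and history never leave.
func sanitize(_ context: LocalContext) -> ChatGPTHandoff {
    ChatGPTHandoff(prompt: context.prompt)
}
```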

A technical paper published by Apple’s machine learning research team adds another layer of detail. The paper states explicitly that no private Apple user data was included in the training data for Apple’s foundation models. That is a direct, attributable claim from Apple’s researchers, not a marketing line, and the kind of statement that would expose the company to regulatory action under the EU’s General Data Protection Regulation or similar laws if it proved false.

The paper also references research on efficient model training techniques that helped Apple build models small enough to run on-device. The privacy payoff is straightforward: the more tasks a phone can handle locally, the fewer queries ever need to travel to any cloud server, Apple’s or otherwise.

What Apple has not fully disclosed

Apple’s technical paper describes how its models were built. It is not a privacy policy, and it does not spell out every detail of the live data pipeline between PCC and OpenAI’s infrastructure. Several questions remain open:

  • Exact data fields transmitted. Apple says requests are sanitized before reaching OpenAI, but no public document lists precisely which metadata, session tokens, or contextual signals are stripped and which, if any, pass through.
  • OpenAI’s handling on its end. OpenAI has not published a separate disclosure explaining how it treats Apple-routed queries differently from direct ChatGPT traffic. Apple says the data is not stored or used for training, but OpenAI has not independently confirmed the operational details.
  • Independent verification. Apple invited external security researchers to inspect PCC through a Virtual Research Environment released in late 2024, and offered bug bounties for confirmed vulnerabilities. That is a meaningful transparency step. However, no full, independent audit report has been published as of May 2026, so the system’s real-world behavior under edge cases and adversarial conditions has not been publicly validated by a third party.
  • Signed-in vs. anonymous use. Apple lets you use ChatGPT through the integration without signing into an OpenAI account. If you do sign in (to access ChatGPT Plus features, for example), OpenAI’s standard data policies may apply to your session. Apple’s on-device prompt warns you about this, but the distinction is easy to miss.

None of these gaps are unusual for a product this new, but they matter. If PCC were to leak metadata or partial query content, Apple’s privacy promise would collapse, and the reputational and legal fallout would be severe. Apple clearly has strong institutional incentives to get this right. Still, trust-but-verify is the appropriate posture until outside auditors weigh in.

How to set it up and what to watch for

If you want to route ChatGPT queries through Apple Intelligence rather than using OpenAI’s standalone app, here is what to do:

  1. Update your device. You need an iPhone 15 Pro or later (or an iPad with an M1 chip or later) running iOS or iPadOS 18.2 or newer. Apple Intelligence is not available on older hardware.
  2. Enable Apple Intelligence. Go to Settings > Apple Intelligence & Siri and make sure Apple Intelligence is turned on.
  3. Turn on the ChatGPT extension. In the same settings menu, look for the ChatGPT toggle under the extensions or third-party integrations section. Enable it. This tells Siri and Writing Tools they can call on ChatGPT when Apple’s own models cannot handle a request.
  4. Decide whether to sign in. You can use the integration without an OpenAI account. If you skip the sign-in, Apple says your queries are processed anonymously. Signing in unlocks paid-tier features but may subject your session to OpenAI’s broader data policies.

Once enabled, Apple surfaces an on-screen prompt every time a request is about to leave PCC and reach ChatGPT. That prompt is your decision point. You can proceed, cancel, or rephrase the query to strip out details you would rather not share. Treat it as a gate, not a speed bump.

Practical habits that reduce exposure further

Routing through Apple Intelligence is a structural improvement over going directly to OpenAI, but it is not a substitute for common sense. A few habits make a measurable difference:

  • Strip identifying details from prompts. Avoid pasting full names, home addresses, account numbers, or medical information into any AI prompt unless you genuinely need the output to include them. The less sensitive data a query contains, the less damaging any hypothetical leak would be. A minimal redaction sketch follows this list.
  • Use on-device features when they suffice. Apple’s own models handle summarization, proofreading, and simple Q&A without involving any cloud server. If the task does not require ChatGPT-level reasoning, keep it local.
  • Review the prompt before confirming. When the ChatGPT handoff screen appears, read what is about to be sent. If the preview includes context you did not intend to share, cancel and rephrase.
  • Compare with alternatives. Google’s Gemini Nano runs certain AI tasks on-device on Pixel phones, and Samsung’s Galaxy AI uses a similar hybrid approach. If you are choosing a phone partly on privacy grounds, compare how each manufacturer handles the on-device vs. cloud split.
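
The first habit can even be semi-automated. Below is a minimal Swift sketch of a personal redaction pass you might run over a prompt before pasting it into any assistant. The patterns are deliberately simple illustrations, not an Apple or OpenAI feature, and they will not catch every identifier.

```swift
import Foundation

// Minimal sketch of a personal habit, not an Apple feature: scrub obvious
// identifiers from a prompt before handing it to any cloud-backed assistant.
func redact(_ prompt: String) -> String {
    let patterns = [
        "[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}",  // email addresses
        "\\b\\d{3}[- .]?\\d{3}[- .]?\\d{4}\\b",    // US-style phone numbers
        "\\b\\d{13,19}\\b"                         // long digit runs (card/account numbers)
    ]
    var cleaned = prompt
    for pattern in patterns {
        cleaned = cleaned.replacingOccurrences(
            of: pattern,
            with: "[redacted]",
            options: [.regularExpression, .caseInsensitive]
        )
    }
    return cleaned
}

// Example:
// redact("Email me at jane.doe@example.com or call 555-867-5309")
// -> "Email me at [redacted] or call [redacted]"
```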

Where this leaves privacy-conscious users

Apple Intelligence does not make ChatGPT privacy-proof. What it does is add a documented, architecturally enforced buffer between your raw data and OpenAI’s servers, backed by a company whose business model depends on maintaining its privacy reputation. That buffer is worth something, especially compared with the alternative of sending every prompt directly to OpenAI with no intermediary.

The remaining uncertainty is real but bounded. Apple has published technical details, opened its infrastructure to security researchers, and made public commitments that carry legal weight. What is still missing is a comprehensive, independent audit confirming that PCC performs as described under adversarial conditions, and a clearer joint disclosure from Apple and OpenAI about exactly how data flows between them during a live query.

Until those pieces arrive, the practical advice is straightforward: use the Apple Intelligence route instead of the standalone ChatGPT app, skip the OpenAI sign-in unless you need premium features, and keep sensitive details out of your prompts whenever you can. You will not eliminate every risk, but you will meaningfully narrow the window of exposure.


*This article was researched with the help of AI, with human editors creating the final content.