Morning Overview

Apple reportedly plans to let you replace Siri’s brain with Claude, Gemini, or another AI of your choice starting this fall

Right now, if you ask Siri a complex question on your iPhone, it can hand the query off to ChatGPT. By this fall, you may get to decide which AI answers instead. Apple is planning to open Siri to third-party AI models, including Anthropic’s Claude and potentially Google’s Gemini, as part of what is expected to be iOS 27, according to Bloomberg reporting from March 2026. The change would let users pick their preferred AI engine through a system-level setting, turning Siri from a single-source assistant into something closer to a switchboard.

If Apple follows through, it would be the clearest admission yet that the company sees licensing external AI as a faster path to competitiveness than building its own frontier-class language model. It would also mean Apple is willing to let rivals power the most personal interface on the iPhone rather than fall further behind.

Where this started

Apple has already taken a significant step in this direction. In December 2024, iOS 18.2 introduced an optional ChatGPT integration under the Apple Intelligence umbrella, allowing Siri to route certain queries to OpenAI’s model with user permission. That feature expanded in subsequent updates, but it was limited to a single external provider and felt more like a stopgap than a strategy.

Behind the scenes, Apple was weighing something far more ambitious. Bloomberg reported in mid-2025 that the company was exploring whether to replace Siri’s own large language models entirely with external ones from Anthropic and OpenAI. Sources inside Apple’s AI and software engineering groups described internal debates over whether to prioritize speed to market or continue investing in homegrown models that were not keeping pace with the competition.

The March 2026 Bloomberg report builds on that foundation. Rather than simply swapping one external model for another, Apple now appears to be designing a broader architecture where multiple AI providers can plug into Siri simultaneously, with users choosing which one handles their requests. The expected unveiling is Apple’s Worldwide Developers Conference in June 2026, with a consumer rollout alongside new iPhones in the fall.

What this would actually look like

Based on Bloomberg’s descriptions, the experience could resemble how iPhone users already choose a default browser or email app. A settings panel would let you designate Claude, ChatGPT, Gemini, or another supported model as the AI behind your voice queries, with the option to switch back to Apple’s built-in model at any time.

That is the vision, at least. The practical details remain thin. There is no public information about the API framework Apple would use to connect third-party models to Siri, how deeply those models would integrate with on-device features like Calendar, Contacts, and HomeKit, or whether Apple would impose limits on what external AI systems can access. Apple could offer a tiered approach where some queries stay on-device while others get routed to external clouds, but the reporting does not describe such a system in detail.

The privacy question Apple has not answered

This is where the plan gets complicated. Apple has spent years building its brand around on-device processing and data minimization. Routing Siri queries through servers operated by Anthropic, Google, or OpenAI would represent a meaningful departure from that promise, even if users opt in voluntarily.

Apple has not explained how it would handle this tension. Open questions include whether requests would be anonymized before leaving the device, what data retention policies partners would need to follow, and whether Apple would enforce strict contractual safeguards or simply disclose each provider’s own privacy terms at the point of selection.

Each external provider brings its own data handling practices and performance characteristics. Users who care most about confidentiality might gravitate toward providers that emphasize limited retention and strong encryption, while others might choose based on speed, creativity, or app compatibility. Without standardized disclosures or enforced baseline rules from Apple, the choice could be powerful but genuinely confusing for people who are not tracking the differences between AI companies.

What we still do not know

Several critical pieces are missing as of June 2026. Neither Bloomberg report specifies which AI providers beyond Anthropic and OpenAI have agreed to participate. Google’s Gemini is a logical candidate given its existing Android integration, but its inclusion has not been confirmed. The reported plan describes Apple’s intended architecture, not a finalized list of launch partners.

The business model is also unclear. According to Bloomberg’s March 2026 reporting, Apple’s existing arrangement with OpenAI involved no licensing fee, with OpenAI absorbing inference costs in exchange for distribution to hundreds of millions of iPhones and the opportunity to convert free users into paid subscribers. Whether the same structure would extend to additional providers, or whether Apple would charge a platform fee or negotiate distinct terms for each partner, has not been reported. The economics will determine which models can afford to participate and how prominently they appear inside Apple’s ecosystem.

Apple itself has made no official statement confirming the expected iOS 27 timeline, the supported models, or the scope of the integration. All reporting traces back to people familiar with the company’s plans, not to on-the-record executives or published documentation. The version number “iOS 27” comes from Bloomberg’s sources and has not been publicly confirmed by Apple. Plans at this stage can still shift before a formal announcement, and Apple has a history of delaying or scaling back ambitious software features between WWDC previews and general release.

Why Bloomberg’s reporting carries weight

Both key reports come from Bloomberg, which has a strong and well-established track record on Apple product leaks and supply chain intelligence. The consistency between the mid-2025 exploration story and the March 2026 expansion report strengthens the overall picture: Apple moved from testing whether third-party models could replace its own AI to designing a multi-provider platform in under a year.

That said, both reports rely on anonymous sourcing. No other major outlet has independently confirmed the details, which could mean Bloomberg’s sources are uniquely well-positioned or simply that other newsrooms have not caught up yet. The timeline and feature scope should be treated as highly plausible, not guaranteed.

How a Siri switchboard could reshape the assistant wars

If Apple turns Siri into a switchboard, the competition would no longer be Siri versus Alexa versus Google Assistant. It would be Claude versus Gemini versus ChatGPT versus whatever model comes next, all running on Apple hardware. Apple would keep control of the distribution channel, the operating system, and the interface while outsourcing the hardest technical problem to companies that have poured billions into training frontier AI systems.

There is also a regulatory dimension worth watching. The EU’s Digital Markets Act has pushed Apple to open up default apps and interoperability on iPhones. While the Siri changes appear to be driven by competitive pressure rather than regulatory mandate, a multi-provider AI framework would align neatly with the direction European regulators have been pushing.

For iPhone users, the practical stakes are conditional but significant. If Apple delivers on what Bloomberg has described, choosing the AI that powers your voice assistant could become as routine as picking your preferred browser. The best-performing AI systems in the world would be one settings toggle away, not locked behind separate apps. The first concrete test will come at WWDC in June 2026, when Apple is expected to show whether this switchboard vision is a polished product or still a work in progress.


*This article was researched with the help of AI, with human editors creating the final content.*