
Google is rolling out a new kind of personalization that reaches deep into your private life, inviting its AI to look at your emails and photos so it can answer questions about your own history. The company pitches this as a way to turn years of messages and images into a kind of searchable memory, powered by its Gemini models and a feature it calls Personal Intelligence. The tradeoff is stark: more convenience in exchange for letting a powerful system analyze some of the most intimate data you store online.
Whether you should agree to that tradeoff depends on how you weigh the promised benefits against the risks of misinterpretation, data exposure, and creeping scope. I see three questions that matter most: what exactly Google is doing with your Gmail and Photos, how much control you actually have, and how this kind of “personal AI” might reshape expectations of privacy far beyond your inbox.
What Google’s AI Mode really does with your Gmail and Photos
Google’s new AI Mode is designed to sit on top of services like Gmail and Google Photos and answer questions about your own life, from travel plans to family events. When you enable Personal Intelligence in AI Mode, it can scan your inbox and image library to surface details like flight confirmations, receipts, or who attended a birthday party, using the company’s Gemini 3 model to generate tailored responses. Google has framed this as an evolution of its long‑running effort to organize personal information, now with generative tools that can summarize, reason and respond in natural language across Gmail and Photos.
Under the hood, AI Mode uses Gemini 3 to process your content in real time rather than folding your inbox or photo library into a global training set. Reporting on the feature notes that the system does not train directly on your Gmail inbox or Google Photos library; instead, it processes data from those sources to generate answers that stay tied to your account. Google says Personal Intelligence can improve results when you search for activities, trips or purchases, and that it is meant to get better over time based on the model’s responses.
How Gemini touches your photos and what “privacy” means in practice
Google has started weaving Gemini directly into Photos, promising smarter search and automatic organization that can recognize people, places and events across your library. In its own documentation, the company says that Gemini features in Photos rely on your personal data to generate suggestions and insights, while emphasizing that this data is kept private and governed by specific privacy notices. A dedicated privacy hub explains how those features work and how your Photos data is handled when you use them.
Digging into the privacy notice for Gemini features in Photos, Google says it uses your images and associated metadata to power things like automatic albums and search, and it outlines how that data is stored and processed. The notice also spells out how your Photos data is kept separate from broader model training, at least for the general Gemini chatbot. The company stresses that these features are governed by specific privacy controls, but the reality is that enabling them still means letting an AI system analyze the faces, locations and moments that define your life.
Is your data training Google’s AI, or just powering your own assistant?
The line between “using data to help you” and “using data to train AI” is exactly where many people get uneasy, and Google’s own messaging has not always helped. A fact‑check of a Gemini permission pop‑up examined a screenshot in which the app asked to summarize an email and then requested permission to use that data to train AI, explicitly tying personal content to model improvement. That screenshot raised the obvious question of how often personal content ends up being used to train AI.
At the same time, Google has insisted that Gmail does not feed your emails into the public training set for its general Gemini model, the one used by the chatbot. A detailed correction pushed back on viral claims that Gmail reads your emails and attachments to train its AI unless you turn it off, explaining that Gemini typically does not feed your emails into the public training set and that your data is isolated from other users. Another report put it bluntly, stating that Google says Gmail does not use your emails to train its AI after all. The company, in other words, is trying to draw a distinction between account‑level processing, where Gmail content can be used to improve features for your own account, and global model training, where it says your emails never enter a shared pool.
Opt‑in promises, opt‑out realities, and the controls you actually have
Google has been careful to describe AI Mode personalization as opt‑in, stressing that users must grant permission before the system can access data from other Google products. In its own explanations, the company emphasizes that personalization in AI Mode is opt‑in, that users grant permission for the AI to access their data from other services, and that the feature is supposed to respect user preferences about data usage. That framing is meant to reassure people that they are in control when they decide whether to let AI Mode look at their Gmail and Photos.
In practice, though, many users are encountering AI features as something to turn off rather than something they consciously turned on. Guides on disabling Google’s AI training for Gmail, Chat and Meet walk people through opening Gmail on desktop or in the iPhone app, tapping the gear icon, changing the data usage settings, and clicking “Save Changes.” Step‑by‑step explainers on turning off Gmail AI training follow the same pattern: open Gmail, click Settings, adjust the relevant controls, and save if prompted, underscoring that the defaults may not match your comfort level. Even on community forums, users share instructions like “Open Gmail and go to Settings, select your account and disable the AI features,” reflecting a grassroots push to reclaim control from tools that feel increasingly automated.
Human reviewers, privacy hubs, and the fine print you should read
Even when Google says its models are not trained directly on your inbox or photo library, there is another layer to consider: human review. In its Gemini Apps Privacy Hub, the company explains that human review helps improve Google services and helps protect Google, its users and the public, noting that a subset of chats are reviewed to refine systems and catch abuse. That means some of your Gemini conversations can be seen by people inside or working with Google, a crucial detail if you are thinking about letting the same family of models read your emails or analyze your photos.