Image Credit: Daniel Ramirez from Honolulu, USA - CC BY 2.0/Wiki Commons

Waymo is quietly turning its driverless cars into rolling AI chat pods, wiring Google’s Gemini models directly into the cabin so riders can talk to the vehicle as naturally as they would to a human driver. The experiment, which centers on a new in-car assistant for its robotaxi service, could redefine what passengers expect from autonomous rides, and it also raises fresh questions about data, safety, and who really controls the experience inside these cars.

Instead of a static touchscreen and a few canned voice prompts, Waymo is testing a conversational layer that sits on top of its self-driving stack, letting riders ask for route changes, local tips, or help with accessibility needs in real time. If this Gemini-powered assistant works as intended, it will not just make robotaxis friendlier; it will also give Waymo a powerful new channel for learning from riders and shaping how people interact with autonomous vehicles in the first place.

Waymo’s Gemini experiment moves from code to car

Waymo has been edging toward this moment for months, but the clearest signal came when a code leak showed the company wiring a feature called “Ride Assistant” into its app, with explicit references to Gemini handling in-car conversations. The leak described how Waymo was preparing to give its robotaxis a voice, using Gemini as the brain behind a system that could answer questions, adjust settings, and respond to rider requests under the “Ride Assistant” label, a notable shift from the mostly button-driven interface riders see today in the Waymo One app. The internal references to Gemini and Ride Assistant by name underscored that this was not a vague concept but a concrete product path tied to specific model names and features, as surfaced in the Ride Assistant code.

Separate reporting then confirmed that Waymo is now actively testing Gemini as an in-car AI assistant in its robotaxis, moving the idea from speculative feature flags into real-world trials with riders. That coverage describes a conversational agent that sits between the passenger and the autonomous vehicle, mediating requests and providing information while the car drives itself. The framing makes clear that Waymo is not just bolting on a generic chatbot; it is building a dedicated in-vehicle assistant that understands the context of a ride, as highlighted in the analysis shared by Gennaro Cuofano.

From driving brain to cabin companion

Gemini is not new to Waymo’s stack, but until now it has mostly lived behind the scenes, helping the company train and refine the systems that actually drive the car. Earlier work described how Waymo built on Gemini to create what it called an “End-to-End Multimodal Model for Autonomous Driving,” processing video, sensor data, and language together so the car could better understand complex traffic scenes. That effort positioned Gemini as a training and perception tool, not something riders would ever directly talk to, as detailed in the description of Waymo’s multimodal driving model.

The new in-car assistant flips that relationship, bringing Gemini out of the lab and into the cabin as a front-of-house presence that riders can interact with in natural language. Instead of only shaping how the car sees the world, Gemini now shapes how the passenger experiences the ride, answering questions about the route, explaining why the car is stopping, or even helping coordinate pickups and drop-offs. That shift from back-end model to cabin companion is significant, because it turns the robotaxi into a two-sided AI system, with one intelligence focused on driving and another, Gemini-based layer focused on conversation and service.

What the Gemini “Ride Assistant” is meant to do

At a functional level, the Gemini-powered Ride Assistant is designed to make the robotaxi feel less opaque and more responsive, especially in moments when riders might otherwise feel anxious or confused. The code leak that surfaced the feature described how Waymo was preparing the assistant to handle spoken queries about the trip, respond to follow-up questions, and even gracefully answer and change the subject when a rider veered into topics not directly related to the ride. Those details suggest a system prompt tuned for in-car etiquette rather than open-ended chat, as indicated in the reporting that first exposed the Ride Assistant behavior.

Additional reporting on the experiment described how autonomous driving leader Waymo has been testing a Gemini-based in-car assistant that can answer rider questions, help with trip details, and manage small talk, with the system explicitly instructed to answer and change the subject when conversations drift into sensitive territory. That account ties Waymo and Google together in a trial that aims to improve the experience inside self-driving vehicles by giving passengers a conversational interface that feels more like a human driver, while still staying within safety and policy guardrails. The description of how the assistant is meant to answer and then redirect conversations comes directly from the system prompts referenced in the coverage of Waymo’s Gemini experiment.

A new front in the robotaxi user experience war

Waymo’s move to embed Gemini in the cabin is also a competitive play, because the battle for robotaxi dominance is increasingly about rider experience, not just raw driving capability. By giving passengers a conversational assistant that can explain what the car is doing, suggest alternate routes, or help with accessibility needs, Waymo is trying to differentiate its service from rivals that still rely on static screens and limited voice prompts. The reporting on the trial framed the feature as a way to improve the experience inside self-driving vehicles, tying Gemini to a strategy focused on comfort and trust rather than only technical performance, as laid out in the account of Waymo tapping a Gemini AI assistant for robotaxis.

In practical terms, that means the assistant could become the primary interface for everything from adjusting the cabin temperature in a Jaguar I-Pace to confirming a multi-stop route in a Chrysler Pacifica Hybrid, replacing a series of taps on a screen with a single spoken request. If Waymo can make that interaction feel reliable and transparent, it gains a powerful edge in markets where riders are still deciding whether they trust a car with no human at the wheel. The more the assistant can anticipate questions, explain maneuvers, and keep riders informed, the more likely those riders are to see the robotaxi as a service they can rely on every day rather than a novelty.

Data, cameras, and the privacy tradeoff

Bringing Gemini into the cabin also raises pointed questions about what data Waymo will collect from riders and how that information will be used. Earlier this year, app researcher Wong surfaced evidence that Waymo may use interior camera data to train generative AI models and potentially support advertising, based on system prompts that described a feature that had not yet shipped in public builds. Those prompts indicated that the system was designed to analyze what riders were doing inside the car, including their reactions and interactions, and then use that information to improve generative models and possibly target commercial content, as described in the report that Waymo may use interior camera data to train generative AI models.

Layer a conversational assistant on top of that, and the privacy calculus becomes even more complex, because the system is now listening to what riders say as well as watching what they do. The same prompts that describe using interior footage to refine generative models also mention high-stakes scenarios and trips riders have been on, which suggests that Waymo is at least exploring how to connect in-cabin behavior with trip history and AI training. If Gemini is handling the conversation, the company will need to be explicit about whether those voice interactions are stored, how they are anonymized, and whether they are used to train Google’s broader Gemini models or kept within Waymo’s own domain. Without clear answers, the assistant risks feeling less like a helpful guide and more like a microphone riders did not fully consent to.

Safety, guardrails, and the limits of small talk

Safety is the other major axis where a Gemini-powered assistant could either strengthen or undermine trust in robotaxis. The system prompts described in the coverage of Waymo’s experiment make it clear that the assistant is instructed to answer and change the subject when conversations drift into areas that could be sensitive or distracting, which is a subtle but important safety feature. By limiting how deeply the assistant engages on topics unrelated to the ride, Waymo reduces the risk that a passenger will treat the AI as a therapist, financial adviser, or political pundit while the car is navigating complex traffic, a boundary that is explicitly encoded in the prompts referenced in the description of Waymo’s Gemini-based assistant.

At the same time, the assistant has to be capable enough to handle high-stakes scenarios, such as a rider reporting a medical emergency, a safety concern about another road user, or confusion about an unexpected detour. The earlier system prompts that mentioned high-stakes scenarios and trips riders have been on in the context of interior camera data show that Waymo is at least thinking about how AI systems should behave when something goes wrong inside the car. If Gemini is the first line of communication in those moments, it will need clear escalation paths to human support, unambiguous language about what it can and cannot do, and strict policies that prevent it from giving advice that could worsen the situation. The balance between friendly small talk and serious, reliable assistance will be one of the defining tests of whether riders accept this new layer of AI in their daily travel.

Why Waymo and Google need each other here

The partnership between Waymo and Google around Gemini is not just a matter of corporate convenience; it reflects a deeper alignment between a company that builds physical autonomy and one that builds large-scale language models. Waymo brings fleets of robotaxis, real-world driving data, and a commercial ride-hailing service, while Google contributes Gemini, the infrastructure to train and deploy it, and the broader ecosystem of AI tools that can plug into the assistant. The earlier work on the End-to-End Multimodal Model for Autonomous Driving showed how Google’s models could help Waymo interpret complex sensor data; the new in-car assistant extends that collaboration into the passenger experience, as documented in the description of Waymo’s use of Gemini for its autonomous driving model.

For Google, having Gemini embedded in a commercial robotaxi service is a showcase for its AI platform, demonstrating that the model can handle not just web search or productivity tasks but also real-time, safety-critical conversations in a moving vehicle. For Waymo, using Gemini instead of building a bespoke language model from scratch lets it move faster and focus its engineering resources on the driving stack and fleet operations. The reporting that described Waymo testing a Gemini-based in-car assistant for its driverless ride-hailing service underscores how central this collaboration has become to the company’s product roadmap, as reflected in the account of Waymo tapping a Gemini AI assistant for its robotaxis.

How riders could feel the difference on day one

If and when Waymo rolls this Gemini assistant out broadly, riders are likely to notice the change almost immediately, even if the underlying driving behavior stays exactly the same. Instead of tapping through menus to report an issue or request a stop, a passenger could simply say, “I need to pick up my friend two blocks ahead,” and let the assistant handle the details, with Gemini parsing the request and coordinating with the routing system. The accounts describing Waymo testing Gemini as an in-car AI assistant in its robotaxis emphasize that the goal is to create a more natural bridge between the rider and the autonomous vehicle, turning what used to be a series of discrete app actions into a fluid conversation, as outlined in Gennaro Cuofano’s analysis of the in-car AI assistant.

Over time, that conversational layer could also help demystify the robotaxi’s behavior, especially in edge cases that currently unsettle some riders, such as sudden braking, unusual lane choices, or extended pauses at intersections. If the assistant can proactively explain, “I am waiting for a pedestrian who is partially hidden behind that parked truck,” or “I am taking a slightly longer route to avoid a collision risk ahead,” it could turn moments of uncertainty into opportunities to build trust. The reporting that framed the Gemini-based assistant as a way to improve the experience inside self-driving vehicles, tying it to comfort and clarity, hints at that longer-term ambition, as captured in the description of Waymo’s plan to enhance the ride experience.

More from MorningOverview