A Waymo self-driving vehicle stopped for an Austin Independent School District school bus on a January morning, then drove past it while children were still boarding and the bus’s red lights and stop arm remained active. The failure happened despite a prior software recall meant to fix exactly this kind of behavior, raising hard questions about whether automated driving systems can reliably protect students in real school-zone conditions.
What Happened on East Oltorf Street
On January 12, 2026, at approximately 7:55 a.m. CST, a 2024 Jaguar I-Pace operating under Waymo’s fifth-generation Automated Driving System approached an Austin ISD school bus on E. Oltorf Street. The vehicle initially stopped, which indicates its sensors detected the bus. It then proceeded forward and passed the school bus while the red lights and stop arm were still active, according to the National Transportation Safety Board. Students were boarding at the time.
That sequence is the core problem. The system saw the bus, recognized enough to brake, and then made a decision to continue anyway. In Texas, as in every other state, passing a school bus with its stop arm extended and red lights flashing is illegal. A human driver who did this would face fines and potential criminal charges. The automated system did it while children stood feet from moving traffic.
Waymo vehicles are designed to operate without a human driver ready to intervene in the moment. When something goes wrong, there is no one in the front seat to override a bad decision. That reality magnifies the stakes of any misinterpretation of school-bus signals: the only line of defense is the software itself.
A Recall That Did Not Hold
This was not the first sign of trouble. Before the January incident, Waymo had already issued a software update to address school-bus stopping behavior. The National Highway Traffic Safety Administration recall database cataloged it as Recall No. 25E-084, covering the same fifth-generation ADS software running in the Jaguar I-Pace that failed on Oltorf Street. The recall specifically targeted the system’s ability to comply with school-bus stop laws.
The fact that a recall had already been filed and a fix deployed before the January 12 event is what makes this case distinct from a one-off software glitch. Waymo identified the risk, pushed an update, and the problem persisted. That pattern suggests the underlying issue is not a simple calibration error but something deeper in how the ADS interprets school-bus signals in real traffic.
Most coverage of autonomous vehicle incidents treats each event as isolated. But the sequence here, a recall followed by a failure, tells a different story. It points to a gap between controlled testing environments and the messy realities of urban school routes, where buses stop at varying intervals, lighting conditions shift, and children move unpredictably near the roadway.
It also raises questions about how safety regulators should treat software recalls. Traditional recalls remove or repair defective hardware. With automated driving systems, companies can push a software patch overnight, declare the issue resolved, and keep fleets on the road. The Oltorf Street incident shows that a recall on paper does not always translate to reliable behavior in the field.
The NTSB Investigation Takes Shape
The NTSB opened a formal investigation into the incident, designating it case number HWY26FH007. The agency titled the probe “Automated Driving System-Equipped Vehicle Passed School Bus Loading Student Passengers,” a description that leaves little ambiguity about what went wrong.
Docket filings associated with the case are accessible through the NTSB’s online docket search, though the investigation remains open and final findings have not been published. What the agency has confirmed so far establishes the basic facts: the vehicle was Waymo-operated, the bus belonged to Austin ISD, and the ADS did not maintain its stop as required by law.
Federal investigators typically examine vehicle telemetry, sensor logs, software version history, and environmental conditions in cases like this. For an ADS-equipped vehicle, the data trail is far more detailed than in a conventional crash investigation, because the system records every perception input and every decision output. That transparency cuts both ways. It means investigators can pinpoint exactly when and why the system chose to proceed, but it also means Waymo’s own logs will likely show the system had enough information to stay stopped and did not.
In addition to digital evidence, investigators often seek statements from the school bus driver, any nearby witnesses, and local law enforcement officers who responded. For a school-bus encounter, those accounts can clarify how close children were to the roadway, whether any student attempted to cross in front of the bus, and how much time passed between the bus activating its stop arm and the automated vehicle moving forward.
Why Software Fixes Keep Falling Short
Autonomous vehicles process school-bus encounters as a classification problem. The system must identify the bus, detect the stop arm position, read the light state, and then apply the correct behavioral rule: stop and remain stopped until the signals deactivate. Each of those steps involves a chain of sensor inputs, software logic, and decision thresholds.
The January incident suggests the Waymo system completed the first steps correctly. It recognized the bus and stopped. The breakdown came in the “remain stopped” phase. One plausible explanation is that the ADS treated the initial stop as sufficient compliance and then re-evaluated the scene, deciding conditions allowed it to proceed. That kind of logic might work at a standard traffic light, where the signal changes and forward movement is expected. It fails catastrophically at a school bus stop, where the only correct action is to wait until the stop arm retracts.
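To make that distinction concrete, here is a minimal decision-logic sketch contrasting the two policies. This is an illustrative simplification under stated assumptions, not Waymo’s actual architecture; every class and function name below is hypothetical.

```python
# Illustrative sketch only: hypothetical names, not Waymo's actual ADS code.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    STOP = auto()
    PROCEED = auto()


@dataclass
class SchoolBusObservation:
    """What perception might report about a stopped school bus each planning cycle."""
    bus_detected: bool
    stop_arm_extended: bool
    red_lights_flashing: bool
    path_appears_clear: bool  # no pedestrians detected in the lane right now


def traffic_light_style_policy(obs: SchoolBusObservation) -> Action:
    """Re-evaluates the scene every cycle and proceeds once the path looks clear.

    Reasonable at a signalized intersection, but applied to a school bus it
    reproduces the failure described above: stop, then go.
    """
    if obs.bus_detected and not obs.path_appears_clear:
        return Action.STOP
    return Action.PROCEED


def school_bus_policy(obs: SchoolBusObservation) -> Action:
    """Holds the stop until the bus itself releases it.

    A momentarily clear path is not sufficient; the hold ends only when the
    stop arm retracts and the red lights deactivate.
    """
    if obs.bus_detected and (obs.stop_arm_extended or obs.red_lights_flashing):
        return Action.STOP
    return Action.PROCEED


if __name__ == "__main__":
    # Children have stepped back onto the curb, so the lane looks clear,
    # but the stop arm and red lights are still active.
    obs = SchoolBusObservation(
        bus_detected=True,
        stop_arm_extended=True,
        red_lights_flashing=True,
        path_appears_clear=True,
    )
    print("traffic-light-style policy:", traffic_light_style_policy(obs).name)  # PROCEED (unsafe)
    print("school-bus policy:", school_bus_policy(obs).name)                    # STOP
```

The contrast is the point: if the hold condition is keyed to how clear the scene looks rather than to the bus’s own signals, a patch that only tunes detection thresholds leaves the unsafe path open.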
This distinction matters for anyone living or driving in cities where Waymo operates. A software patch can adjust detection thresholds or add new training data for school-bus configurations. But if the deeper decision architecture treats a school-bus stop the same way it treats other temporary traffic holds, patches will keep missing the mark. The recall filed under NHTSA No. 25E-084 addressed the symptom. The Oltorf Street incident suggests the root cause survived the fix.
Another challenge is variability. School buses differ in size, color accents, light placement, and stop-arm design across districts and states. Weather, glare, and occlusions from other vehicles can further complicate detection. Robust performance requires not just recognizing an idealized bus in clear daylight, but handling edge cases (partial views, crowded streets, and children approaching from between parked cars).
What This Means for Austin and Other Cities
Austin ISD buses operate on hundreds of routes across the city, making stops in residential neighborhoods, on busy arterials, and near intersections with complex traffic patterns. Every one of those stops requires surrounding vehicles to comply with the same law the Waymo system violated. Parents sending children to school expect that compliance to be absolute, not probabilistic.
The district’s involvement in this case is notable because, according to the NTSB investigation record, the bus in question was an Austin ISD vehicle. That means the district has direct standing in the federal investigation and a clear interest in how Waymo addresses the failure. Available sources do not establish whether Austin ISD and Waymo had any training protocols or formal agreements in place before the recall. But the sequence of events, a recall targeting school-bus behavior followed by a school-bus violation involving a district bus, shows that whatever remediation was in place did not fully translate into on-road safety.
For Austin, the incident poses immediate policy questions. City officials must decide whether to adjust operating permits, impose additional geofencing around school zones, or require real-time reporting of any school-bus interactions involving automated vehicles. Austin ISD, for its part, may seek clearer notification when autonomous fleets expand into new routes that overlap heavily with student pick-up and drop-off locations.
Other cities watching the case face similar choices. As automated vehicles move from pilot projects to regular service, local governments are being asked to accept assurances that software can handle complex, child-centered environments. The Oltorf Street failure suggests those assurances should be tested against concrete, verifiable performance in the most sensitive scenarios, not just average behavior across a fleet.
Ultimately, the question is not whether automated systems can outperform the worst human drivers (on many metrics, they already do), but whether they can be trusted with the specific, high-consequence duty of protecting children near school buses. Until investigations like HWY26FH007 reach clear conclusions and companies demonstrate that post-recall behavior matches legal expectations, that trust will remain fragile.
*This article was researched with the help of AI, with human editors creating the final content.