Vladimir Srajber/Pexels

Federal auto safety regulators are logging a growing trail of complaints that Tesla’s most advanced driver-assistance system is blowing through red lights and drifting across lane lines, sharpening questions about how safely the technology behaves in everyday traffic. The pattern of reports is pushing a once-hyped feature into the center of a high-stakes safety and regulatory fight that now stretches from Washington to local police departments and courtrooms.

As the National Highway Traffic Safety Administration (NHTSA) digs into how Tesla’s Full Self-Driving (FSD) behaves on public roads, the emerging record of alleged traffic violations is forcing a more basic reckoning over what drivers can reasonably expect from software that still legally requires human supervision. I see a widening gap between the marketing promise of near-autonomy and the messy reality of a system that federal investigators now link to dozens of incidents involving traffic control failures and lane-keeping errors.

Regulators tally at least 80 reports of FSD traffic control failures

The most striking development is the sheer number of complaints now tied directly to Tesla’s most capable driver-assistance package. NHTSA has identified at least 80 instances in which Full Self-Driving allegedly ran red lights or crossed lane markings in ways that drivers say they did not anticipate or could not easily correct. That figure is not a casual tally from social media; it is a formal count inside an active federal safety probe, and it signals that what once looked like isolated anecdotes is now being treated as a systemic pattern.

Those 80 complaints cover a spectrum of behavior that would be troubling even coming from a human driver, from entering intersections against a solid red to veering out of a designated lane while traffic is moving at highway speeds. The fact that drivers reported these incidents while relying on a feature branded as “Full” self-driving raises the stakes for NHTSA, which is tasked with deciding whether the software’s design or execution creates an unreasonable risk to the public. I read that number as a pivot point: it is large enough to move the conversation from “edge cases” to a question of whether the system’s core decision-making is robust enough for the complexity of U.S. roads.

Inside the Office of Defects Investigation’s widening Tesla case

Behind those complaint counts sits a specialized unit with a very specific mandate. The federal safety agency’s Office of Defects Investigation, known as ODI, is probing whether Tesla’s driver-assistance software has a defect that makes it prone to breaking basic traffic rules. ODI’s job is not to referee marketing language or social media debates; it is to determine whether a safety-related defect exists and, if so, whether it warrants a recall or other corrective action. The fact that ODI is now focused on how the system handles red lights and lane boundaries shows that investigators see potential problems not just in crash outcomes but in the way the software interacts with the rules of the road.

In practical terms, an ODI probe means Tesla must turn over detailed data about how its vehicles behave when Full Self-Driving is active, including logs from the moments before and after alleged violations. It also means regulators are comparing those data traces with driver complaints to see whether the software is making the same kinds of mistakes across different vehicles, locations, and software versions. When ODI zeroes in on patterns like repeated lane departures or failures to stop at signals, it is looking for evidence that the issue is baked into the system’s logic rather than the product of a few inattentive drivers, which is why this phase of the investigation is so consequential for Tesla.

From red lights to lane jumps, what drivers say FSD is doing on the road

Strip away the acronyms and legal language, and the core allegations are easy to visualize. Drivers describe Full Self-Driving approaching intersections at city speeds and then rolling or even accelerating through solid red lights, behavior that would earn any human a ticket and could cause a serious crash. Others recount the system drifting across painted lane lines on multi-lane roads, sometimes nudging toward adjacent traffic or exit ramps the driver did not intend to take. These are not exotic edge cases like unmarked construction zones; they are the bread-and-butter scenarios of daily commuting.

Some of the most detailed accounts have been collected in a cluster of safety complaints that focus specifically on traffic light behavior. In those reports, owners say the system misreads or ignores signals, including protected left-turn arrows and standard red indications, in ways that force them to intervene abruptly. That pattern is reflected in coverage that pulls together multiple driver narratives under the banner of Tesla FSD Safety Complaints Detail Traffic Light Violations, which highlights how often the alleged misbehavior involves basic signal recognition rather than obscure corner cases. When I look across those accounts, what stands out is not just the number of incidents but the consistency of the themes: traffic lights and lane discipline, the two pillars of safe navigation, keep showing up as weak points.

Federal collision probe puts Tesla’s FSD under sharper scrutiny

The complaints about red-light runs and lane jumps are not happening in a vacuum; they are feeding into a broader federal investigation that already links Tesla’s software to real-world crashes. Earlier this year, regulators opened a new auto safety probe after reports that FSD ran red lights and contributed to collisions, putting the company’s most advanced driver-assistance feature at the center of a formal inquiry into crash causation. That probe is not just about near-misses or theoretical risks; it is about incidents where property was damaged and people were put in harm’s way.

In that context, the red-light and lane-keeping complaints look less like isolated glitches and more like potential precursors to the kinds of collisions now under review. The federal investigation described in reports of the Tesla auto safety probe into FSD collisions underscores that regulators are no longer treating FSD as an experimental add-on but as a system whose real-world performance has direct consequences for crash statistics. When a feature that is supposed to assist with the driving task is instead alleged to have helped cause collisions, the burden shifts to Tesla to show that its safeguards and driver monitoring are strong enough to keep those risks in check.

Alleged traffic law violations push Tesla into legal and political crosshairs

As the safety probes deepen, a parallel narrative is emerging around basic compliance with traffic laws. Tesla is now under federal investigation over self-driving cars allegedly breaking those laws, with complaints that the vehicles, when using advanced driver-assistance features, have failed to obey signals and lane markings that every human driver is expected to follow. That framing moves the conversation from abstract questions about artificial intelligence to a more concrete issue: whether a mass-market product is repeatedly violating the same rules that underpin road safety.

The legal and political implications of that shift are significant. When federal investigators and local authorities see patterns of alleged lawbreaking, they start to ask not only whether a defect exists but whether the company has been sufficiently transparent about the system’s limitations and risks. Coverage of how Tesla is under federal investigation over self-driving cars allegedly breaking traffic laws highlights that complaints now explicitly reference violations like running red lights, which are easy for regulators and the public to understand. I see that clarity as a turning point: it is much harder to dismiss concerns as technical misunderstandings when the alleged behavior maps directly onto familiar infractions.

What the growing complaint file means for Tesla drivers

For Tesla owners who use Full Self-Driving on daily commutes, the expanding complaint file has immediate practical implications. Even if their own vehicles have never blown a light or jumped a lane, they now know that at least 80 other drivers have told NHTSA that something like that happened to them while the system was active. That knowledge can subtly change how a driver supervises the software, encouraging more hands-on vigilance at intersections and during lane changes, and potentially reducing the convenience that drew many to FSD in the first place.

There is also a financial dimension. If the Office of Defects Investigation ultimately concludes that a defect exists, Tesla could be required to push out software changes or even limit certain features, which would directly affect the value proposition of a package that many owners paid thousands of dollars to unlock. At the same time, the detailed accounts compiled in resources like Tesla FSD Safety Complaints Detail Traffic Light Violations give drivers a clearer picture of the specific scenarios where they may want to be especially cautious, such as complex intersections with multiple signal phases or highways with ambiguous lane markings. From my vantage point, the message to owners is not that the system is unusable, but that it demands a level of skepticism and oversight that sits uneasily with the “Full” self-driving label.

How Tesla’s branding collides with regulatory expectations

Part of what makes this investigation so charged is the gap between Tesla’s branding and the legal reality of its technology. The company markets its most advanced package as Full Self-Driving, yet regulators consistently describe it as a driver-assistance system that still requires an attentive human behind the wheel. When NHTSA and ODI review complaints about red-light runs and lane departures, they are doing so within a framework that assumes the human is ultimately responsible, even as the software takes on more of the driving task.

That tension is evident in the way federal officials frame their questions to Tesla. They are not only asking whether the software can technically handle traffic lights and lane lines; they are also probing how the company communicates its capabilities and limitations to drivers who may over-trust the system. The widening probe described in coverage of how Tesla Faces Tough Questions From US Regulator As FSD Probe Widens With More Complaints And Incident Reports underscores that regulators are scrutinizing not just code, but messaging. From my perspective, the more complaints pile up about basic traffic violations, the harder it becomes for Tesla to argue that any misuse is purely the result of driver misunderstanding rather than a foreseeable consequence of its own branding.

The broader stakes for automated driving in the United States

Although Tesla is at the center of this particular storm, the outcome will ripple across the entire automated driving landscape. If NHTSA concludes that a high-profile system like Full Self-Driving has a defect related to traffic control and lane discipline, it will set a precedent for how aggressively regulators police similar features from other automakers and tech companies. That could influence everything from how advanced driver-assistance systems are tested and validated to how they are marketed to consumers who may not grasp the distinction between “assist” and “autonomy.”

At the same time, the investigation is forcing a broader public conversation about what level of imperfection is acceptable in software that shares the road with human drivers. Human beings run red lights and drift out of lanes every day, often with tragic results, yet society has long tolerated that risk. When a machine makes the same mistakes, especially at scale and under the banner of cutting-edge innovation, the tolerance threshold appears much lower. As I weigh the growing list of complaints and the formal scrutiny from ODI and other federal investigators, it is clear that the bar for automated systems is being set higher than for humans, and that Tesla’s current troubles will help define where that bar ultimately lands.
