
Federal regulators have opened a sweeping investigation into Tesla’s self-driving technology after a series of crashes and traffic violations tied to the company’s driver-assistance software. The probe, which covers nearly 2.9 million Teslas on U.S. roads, raises fundamental questions about how far automakers can push automation before safety regulators and the public push back.

At stake is not just the future of Tesla’s so-called Full Self-Driving system, but the broader trajectory of autonomous driving in the United States, from how traffic laws are enforced to how much risk drivers are willing to accept from software that promises convenience but can misread the road. I see this investigation as a pivotal test of whether regulators can keep pace with fast-evolving technology without freezing innovation in place.

Regulators zero in on nearly 3 million Teslas

The new federal probe is remarkable in scope, touching almost every Tesla sold in the United States that is equipped with the company’s advanced driver-assistance features. Federal officials are examining nearly 2.9 million Teslas after reports that the software contributed to crashes, including incidents where vehicles allegedly failed to respond properly to traffic controls and other cars. The scale of the review signals that regulators are no longer treating these as isolated glitches, but as a potential systemic safety problem affecting a massive fleet of vehicles.

Investigators are focusing on the company’s Full Self-Driving package, which Tesla markets as an optional upgrade on models like the Model 3, Model Y, Model S, and Model X, and which is now under scrutiny for its behavior in real-world traffic. Complaints and crash reports have prompted regulators to revisit safety questions that have lingered around the brand’s automation features for over three years, a shift that underscores how regulatory patience with incremental software tweaks may be wearing thin as the fleet grows and the stakes rise. The decision to open a formal investigation into nearly 2.9 million cars equipped with the system is, in my view, a clear sign that regulators are treating this as a fleetwide risk rather than a niche software bug.

Crash reports, red lights, and alleged traffic violations

At the heart of the investigation are specific allegations that Tesla’s automation has not just failed to prevent collisions, but in some cases may have actively contributed to them. Federal regulators are reviewing reports that the self-driving system ran red lights, misjudged intersections, and was involved in crashes where the car did not behave as a cautious human driver would. Those reports include collisions where the vehicle allegedly proceeded through traffic signals or failed to yield, raising the possibility that the software is not consistently obeying the most basic rules of the road.

Complaints have also alleged that the system broke traffic laws in ways that would normally earn a human driver a ticket, including accusations that Teslas on the system rolled through stop signs or accelerated into intersections against the light. Those claims have now drawn a formal response: federal officials are examining whether the software’s decision-making is compatible with existing traffic codes, and they are reviewing the reports of crashes and red-light violations in detail as they assess how the system behaves in real-world traffic.

What Tesla says Full Self-Driving can, and cannot, do

Even as the investigation intensifies, Tesla has consistently argued that its technology is being misunderstood. The company has repeatedly said the system cannot drive itself and that human drivers must be ready to intervene at all times, a message that appears in its documentation and on-screen warnings. In Tesla’s framing, Full Self-Driving is an advanced driver-assistance suite that can handle tasks like lane changes, highway navigation, and city-street turns, but it is still a Level 2 system that relies on an attentive driver with hands on the wheel and eyes on the road.

That distinction between marketing name and technical capability is central to the regulatory debate. Tesla continues to test its so-called Full Self-Driving technology on public roads, and the company has pitched it as a way to reduce driver workload, with some owners treating it as a near-autonomous chauffeur that lets them relax or even look away from the road. Regulators are now asking whether the gap between the company’s assurances that the system cannot drive itself and the way some drivers actually use it has created a safety hazard, especially as Tesla keeps testing the technology on public roads while telling drivers they must be ready to intervene at all times.

How the new probe fits into years of scrutiny

The current investigation does not emerge from a vacuum. Tesla’s automation features have been under some form of federal scrutiny for over three years, with earlier probes focused on Autopilot’s performance in crashes involving stationary emergency vehicles and other unusual scenarios. Those earlier cases raised questions about whether the company’s driver monitoring was robust enough and whether the software could reliably detect and respond to hazards that fall outside the neat lanes and predictable flows of highway driving.

What has changed now is the breadth of the concerns and the maturity of the technology. Full Self-Driving has moved from a limited beta to a widely deployed option, and the number of Teslas using it has grown into the millions, which means even rare failure modes can translate into a significant number of real-world incidents. Federal regulators are now layering this new probe on top of that history, effectively asking whether the incremental over-the-air updates Tesla has pushed have adequately addressed earlier safety flags or whether the underlying approach to automation needs a deeper rethink. In that context, the decision to open a new federal investigation after reports that FSD ran red lights and caused collisions shows that regulators see this as an escalation rather than a routine follow-up.

The legal stakes: traffic laws and California’s new rules

Beyond crash statistics, the investigation is forcing regulators to confront a thorny legal question: who is responsible when a car on a driver-assistance system breaks the law? Traditionally, traffic enforcement has focused on human drivers, but as software takes over more of the driving task, states and federal agencies are grappling with how to assign liability when a vehicle on Full Self-Driving rolls through a stop sign or runs a red light. The current probe is explicitly examining whether Tesla’s system is compatible with existing traffic codes, and whether the company’s design choices encourage behavior that would be illegal if a human driver made the same decisions unaided.

California is emerging as a key testbed for these questions. A new state law is set to hold driverless car companies accountable for traffic violations, a shift that could reshape how companies design and deploy autonomous systems in one of the country’s largest car markets. While Tesla maintains that its system is not fully driverless, the California law signals that policymakers are preparing for a world where software, not just humans, can be cited for traffic offenses. Combined with the federal probe into nearly 2.9 million Teslas over crashes and traffic violations, that change could redefine how responsibility is shared between drivers and manufacturers.

Wall Street reaction and Tesla’s market narrative

The regulatory pressure is not just a safety story, it is a market story. Tesla’s valuation has long been tied to the promise that its software, particularly Full Self-Driving, would unlock new revenue streams and justify a premium over traditional automakers. When federal regulators open a probe into the core technology behind that narrative, investors take notice, and the company’s share price tends to reflect that shift in sentiment almost immediately.

As details of the investigation emerged, Tesla shares fell and market sentiment turned defensive, with traders reassessing the risk that regulators could force software changes, limit deployment, or even mandate recalls. The prospect that the Full Self-Driving system might face stricter oversight, or that the brand could become associated in the public mind with crashes and traffic violations, weighs on the growth story that has fueled Tesla’s rise.

Inside the federal safety questions

From a safety perspective, regulators are probing several intertwined issues. One is whether Tesla’s system reliably recognizes and responds to traffic signals, stop signs, and other vehicles in complex urban environments, where unpredictable human behavior and dense infrastructure can confuse even experienced drivers. Another is whether the company’s driver monitoring is sufficient to ensure that humans remain engaged and ready to take over, especially when the system encounters a situation it cannot handle.

Federal investigators are also examining how Tesla’s software handles edge cases, such as unusual intersections, temporary construction zones, or emergency vehicles, and whether the system’s behavior in those scenarios meets the standard of care expected of a human driver. The reports of crashes and red-light violations that triggered the probe suggest regulators are particularly concerned about scenarios where the system appears to have misread or ignored clear traffic controls. The investigation will likely involve detailed data requests, on-road testing, and a close look at how the company validates its software before pushing updates to the fleet.

The broader self-driving landscape and what comes next

Although Tesla is the focus of this particular investigation, the questions it raises extend across the self-driving industry. Every company working on automated driving, from robotaxi operators to traditional automakers, is watching to see how regulators define acceptable behavior for software that shares the road with human drivers. If federal officials conclude that Tesla’s approach to automation is too aggressive or too reliant on human backup, that finding could ripple through the sector and influence how other firms design and market their own systems.

For drivers, the probe is a reminder that the road to autonomy is not a straight line. The promise of cars that can handle most of the driving task is alluring, especially for long commutes and congested city streets, but partial automation can create new kinds of risk if drivers overtrust the system or if the software behaves in ways that are hard to predict. As regulators examine nearly 2.9 million Teslas after more crashes involving the company’s so-called Full Self-Driving technology, the outcome will help determine how quickly the country moves toward more automated roads and how much responsibility remains in the hands of the person behind the wheel.

More from MorningOverview