Federal regulators are scrutinizing nearly 3 million Tesla vehicles after reports that cars using the company’s advanced driver-assistance software veered into oncoming traffic and were involved in serious crashes. The probe zeroes in on how Tesla’s “Full Self-Driving” technology behaves in real-world traffic, and whether the system’s design encourages drivers to trust it more than they safely should.
At stake is not only the safety of Tesla owners but also the broader public’s confidence in automated driving, as investigators weigh whether a flagship technology from one of the world’s most prominent carmakers is creating unacceptable risks on ordinary roads.
The new federal probe and what triggered it
Regulators opened the latest investigation after a pattern of alarming incidents in which Teslas reportedly ran red lights, made unsafe turns and, in some cases, steered into opposing lanes. According to federal documents, the inquiry covers nearly 3 million cars equipped with the company’s advanced driver-assistance features, reflecting concern that the problems are systemic rather than isolated glitches. The scope includes vehicles that can receive over-the-air software updates, which means the behavior of the system can change rapidly across the entire fleet.
In a formal notice, the National Highway Traffic Safety Administration detailed allegations that the software could contribute to wrong-way driving, abrupt lane changes into oncoming traffic and other hazardous maneuvers, prompting a preliminary evaluation of the affected vehicles. That notice, filed as part of an Office of Defects Investigation case, outlines how the agency is gathering crash data, software logs and design information from Tesla to determine whether a defect exists and whether a recall is warranted, as reflected in the public investigation record.
Nearly 3 million Teslas under the microscope
The scale of the probe is striking, with federal investigators examining nearly 3 million Teslas that use the company’s most advanced driver-assistance package. That figure effectively covers almost every Tesla sold in the United States with the hardware capable of running the software, underscoring how central the technology has become to the brand’s identity and marketing. The investigation spans multiple model years of the Model 3, Model Y, Model S and Model X, all of which can run the same core code even if their hardware configurations differ.
Reports describe this as the latest in a series of federal reviews of Tesla’s automated driving claims, noting that regulators have been looking at crash patterns and driver behavior around these systems for more than three years. The new case builds on that history by focusing on whether the software’s current design and updates have introduced fresh risks, with one account emphasizing that the probe follows a string of incidents linked to the self-driving technology and renewed scrutiny of Elon Musk’s public assurances about the system’s capabilities, as detailed in a recent overview of the federal probe.
How Tesla’s “Full Self-Driving” fits into the case
At the center of the investigation is Tesla’s “Full Self-Driving” option, a software package that the company sells as an upgrade promising automated lane changes, traffic light recognition and navigation on city streets. Despite the branding, regulators classify FSD as a Level 2 driver-assistance system, which means it can control steering and speed in certain conditions but still requires a human driver to remain fully engaged and ready to intervene at any moment. That legal distinction is crucial, because it shapes how much responsibility Tesla bears for the system’s behavior versus the driver’s actions.
Regulators are examining whether the way FSD operates, and the way it is marketed, may lead drivers to overestimate its capabilities and pay less attention to the road. Reports note that the Level 2 designation reflects a system that is supposed to assist, not replace, the driver, yet some crashes under review involve scenarios where the car allegedly ran red lights or failed to yield, suggesting that drivers may have trusted the software to handle complex urban situations on its own. One detailed account explains that the FSD system under investigation is classified as Level 2 driver-assistance software that still requires drivers to pay full attention, even as its expanding feature set and price point signal a more ambitious vision, a tension highlighted in a recent summary of the federal probe.
Wrong-way driving, red lights and crash patterns
The most disturbing allegations involve Teslas that appeared to steer themselves into oncoming traffic or proceed through intersections against the signal while the automated system was active. Investigators are looking closely at reports of vehicles veering into opposing lanes, making unexpected lane changes and failing to respond appropriately to traffic lights, all of which can quickly escalate into head-on collisions or side-impact crashes. These behaviors go to the heart of whether the software’s decision-making logic is robust enough for dense, unpredictable city streets.
Safety officials have said the investigation was initiated after a series of complaints about cars running red lights and veering into oncoming lanes, along with reports of crashes and fires that may be linked to the automated features. Those concerns are now being weighed against Tesla’s claims that its software improves safety by reducing human error, with regulators emphasizing that any system that can steer a vehicle must be held to a high standard of reliability. One account notes that the safety agency opened the case after reports of vehicles running red lights and veering into oncoming lanes, and that U.S. auto safety regulators are now probing nearly 3 million Tesla cars for self-driving safety risks, as described in a detailed account of the safety concerns.
What the 2.9 million figure tells us about risk
The investigation’s reach, covering 2.9 million vehicles, underscores how software-driven features can scale potential safety issues across an entire fleet almost instantly. Unlike a mechanical defect limited to a specific batch of parts, a problematic line of code can affect every car that downloads a particular update, which is why regulators are increasingly focused on how companies test and validate software before pushing it to customers. For Tesla owners, the 2.9 million figure is a reminder that they are part of a vast, interconnected experiment in automated driving, one that regulators are now trying to map and understand.
Reports on the probe emphasize that the U.S. has launched an inquiry into nearly 2.9 million Tesla cars over crashes linked to the self-driving system, with investigators zeroing in on traffic violations such as running red lights and lane changes into opposing traffic. That framing highlights how the case is not just about isolated crashes but about whether the system’s behavior systematically increases the likelihood of certain dangerous maneuvers, part of a broader look at how the software handles complex road rules, a focus captured in a recent report on the 2.9 million vehicle probe.
Inside the wrong-side-of-the-road allegations
Among the most vivid accounts are those describing Teslas that appeared to drive on the wrong side of the road while automated features were engaged. These incidents cut directly against public expectations that driver-assistance technology should at least avoid obvious, high-consequence errors like entering an opposing lane. They also raise questions about how the system interprets lane markings, temporary construction zones and complex intersections where visual cues can be ambiguous.
One detailed report describes how Tesla is being investigated by the U.S. after claims that cars using its self-driving technology ended up on the wrong side of the road, with some of the crashes resulting in injuries. That account, by technology reporter Imran Rahman-Jones, emphasizes that the company disputes that its software is unsafe when used as directed, even as the allegations center on vehicles that, while using automated features, drove into opposing lanes and were involved in injury crashes, a concern laid out in a detailed account by Imran Rahman-Jones.
How owners and investors are reacting
For Tesla owners, the probe has sharpened an already intense debate over how much to trust the company’s automated features. Some drivers say the technology makes long commutes less stressful and reduces fatigue, while others report unnerving behavior such as phantom braking, abrupt lane changes or confusion at complex intersections. The investigation has also prompted fresh questions about whether owners fully understand that they must remain in control at all times, despite the “Full Self-Driving” branding and the car’s apparent ability to handle routine tasks on its own.
Investor reaction has been equally charged, with some seeing the probe as a predictable step in the maturation of automated driving and others warning that regulatory pushback could slow Tesla’s growth story. One account quotes money manager Ross Gerber, who said, “The world has become a giant testing ground for Elon’s concept of full self-driving,” capturing a broader unease about how quickly the technology has been deployed. That same report notes that the probe follows earlier crashes, including one in which a pedestrian was killed, and that regulators are now looking at whether Tesla’s approach to real-world testing has outpaced safety oversight, as described in a detailed analysis quoting Ross Gerber and Elon Musk.
Crash reporting, NHTSA and a history of tension
The current probe does not exist in a vacuum; it follows earlier clashes between Tesla and federal safety regulators over how the company reports crashes involving its automated systems. Over the summer, Tesla faced a new federal inquiry after NHTSA said it had received allegations that the company was not reporting certain crashes in a timely or complete manner, potentially obscuring patterns that could point to safety defects. That history has shaped how regulators approach the new case, with a heightened focus on data transparency and the accuracy of Tesla’s submissions.
One detailed account explains that Tesla faces a U.S. auto safety probe over faulty crash reporting after NHTSA found potential gaps in the company’s disclosures, raising concerns about whether regulators were getting a full picture of incidents involving automated features. Another report quotes NHTSA officials saying that, if crash reports were indeed delayed, they want to ensure the reporting system is corrected so the agency receives information in a timely fashion, underscoring how critical accurate data is to its mission. That perspective is reflected in a recent summary of Tesla’s clash with NHTSA and in a detailed report on delayed crash reports.
What past safety battles reveal about today’s stakes
To understand the stakes of the Tesla probe, it helps to look at how NHTSA has handled major safety controversies in the past. The agency has long relied on detailed data collection and public reporting to identify patterns, from crash databases to specialized evaluations of vehicle components. For example, an earlier evaluation of automobile parts content labeling under the American Automobile Labeling Act shows how regulators use formal reviews to assess whether information provided to consumers is accurate and useful, a process documented in public summaries that anyone can follow.
Another instructive case is the high-profile Toyota floor mat issue, where concerns about unintended acceleration led to a series of recalls and intense scrutiny of both mechanical design and driver behavior. In that episode, the most recent NHTSA report was made available through official channels, and Toyota emphasized that the media could reproduce it in full without charge, illustrating how transparency and documentation are central to restoring public trust after a safety scare. Those precedents, captured in a public evaluation report on labeling and in a detailed statement on the Toyota floor mat recall, suggest that the Tesla case could ultimately hinge not only on technical fixes but also on how openly the company engages with regulators and the public.
Where the investigation goes from here
From here, NHTSA’s preliminary evaluation could evolve into an engineering analysis, a recall, or a decision that no defect exists, depending on what the data show. Investigators will be looking at crash reports, vehicle logs and software design documents to determine whether the automated features behave as intended and whether those intended behaviors are safe in the first place. They will also weigh how Tesla communicates the system’s limitations to drivers, including on-screen warnings, owner’s manuals and marketing materials.
Public pressure is likely to remain intense as owners share their experiences and as videos of unusual behavior circulate online. One widely viewed segment titled “2.9 Million Tesla Cars Face Probe After Owners Report …” captures how social media and video platforms have become part of the feedback loop, amplifying both legitimate safety concerns and misunderstandings about how the technology works. That clip, which opens by asking, “do you own a Tesla? have you been having issues with your Tesla’s self-driving feature…,” reflects a growing sense among some drivers that the software is still a work in progress, a sentiment that regulators will have to factor into their assessment of real-world risk, as seen in the widely shared video on 2.9 Million Tesla Cars.
The broader implications for Elon Musk and automated driving
For Elon Musk, whose public persona and business strategy are tightly bound to the promise of autonomous driving, the probe is a direct test of his long-standing claim that software will make Teslas dramatically safer than human drivers. Investors and competitors alike are watching to see whether regulators ultimately validate that vision or conclude that the current implementation falls short of acceptable safety standards. The outcome could influence not only Tesla’s valuation but also how other automakers and tech companies roll out their own advanced driver-assistance systems.
One detailed account notes that federal investigators are probing nearly 3 million Teslas after crashes linked to self-driving tech, describing it as the latest effort from regulators to scrutinize Elon Musk’s electric car maker, which has faced federal probes for years over its automated driving claims. That history suggests that the current case is part of a broader negotiation between innovators pushing the boundaries of software-driven mobility and regulators tasked with keeping roads safe for everyone. As one analysis of the situation explains, the probe could shape the future of automated driving and the regulatory environment around Elon Musk’s ambitions, a dynamic captured in a detailed report on the federal scrutiny of Elon Musk.