The National Highway Traffic Safety Administration escalated its investigation into Tesla’s Full Self-Driving software on March 19, 2026, expanding the scope to cover up to 3.2 million vehicles after a series of crashes in low-visibility conditions. The agency’s decision, outlined in a memo dated March 18, centers on nine reported collisions involving fog, sun glare, and other reduced-visibility scenarios, including at least one fatal pedestrian strike. For the millions of Tesla owners who rely on FSD for daily driving, the widened probe raises direct questions about whether the system can safely handle the conditions drivers encounter most often on real roads.
Nine Crashes Triggered the Expansion
The probe, designated PE24031, grew out of incident data collected under the agency’s Standing General Order (SGO 2021-01), a framework that requires manufacturers to submit standardized reports on crashes involving advanced driver-assistance and automated driving systems. Those reports, available as downloadable datasets with defined data elements, gave federal investigators a structured view of how FSD-equipped Teslas behaved in the moments before impact.
The nine crashes that prompted the expansion all share a common thread: the vehicles were operating in conditions where visibility was compromised. Fog, sun glare, and airborne dust appear repeatedly across the incident reports. In at least one case, a pedestrian was killed in low-visibility conditions, an outcome that sharpened the urgency behind the agency’s decision to widen the investigation rather than close it.
The Wall Street Journal reported that NHTSA identified several crashes in which FSD failed to respond appropriately to reduced-visibility conditions. That language matters because it signals the agency believes the system itself, not just the driver, may bear responsibility for the failures. Investigators are no longer simply asking whether human drivers misused the technology; they are examining whether the technology, as designed, is inherently unable to cope with certain common hazards.
According to an Associated Press summary of the memo, the expanded probe covers Tesla models dating back several years that can run the latest FSD software, including sedans and SUVs across the company’s lineup. The crashes under review occurred in multiple states and roadway types, suggesting the issue is not limited to a particular region, road design, or speed range.
Why Camera-Only Sensing Faces Hard Questions
Tesla’s FSD system relies entirely on cameras for environmental perception, having removed radar and ultrasonic sensors from its vehicles in recent years. That design choice works well in clear daylight but faces inherent physical limits when fog scatters light, when sun glare saturates image sensors, or when dust reduces contrast. The nine crashes in the expanded probe all occurred in exactly those degraded conditions, which suggests a pattern rather than isolated edge cases.
Most coverage of this probe has focused on the recall risk or Tesla’s stock implications. But the more consequential question is whether a camera-only architecture can ever meet the safety bar NHTSA expects for a system marketed as “Full Self-Driving.” If the agency concludes that the sensor suite itself is the limiting factor, software updates alone may not resolve the deficiency. That could force Tesla to confront a hardware problem across millions of vehicles already on the road, a far more expensive and logistically difficult fix than an over-the-air patch.
NHTSA classifies FSD as a Level 2 driver-assistance system, meaning the driver must remain fully attentive at all times, according to the agency’s automated vehicle guidance. But the probe implicitly asks a harder question: if the system cannot reliably detect hazards in fog or glare, does it matter whether the driver is paying attention? A human who trusts FSD to handle highway driving in poor visibility may not react quickly enough when the system fails to register a pedestrian or a stopped vehicle ahead. The investigation is therefore as much about human factors (how drivers behave when assisted by automation) as it is about the underlying algorithms.
Engineers who favor multi-sensor approaches argue that radar and lidar can penetrate fog, dust, and glare conditions that blind cameras. Tesla has taken the opposite bet, insisting that advances in neural networks and training data can overcome the limitations of optics alone. The nine low-visibility crashes now under federal scrutiny will test that thesis in a venue where the cost of being wrong is measured not in product delays but in injuries and deaths.
A Second Probe Adds Pressure
The low-visibility investigation does not exist in isolation. A separate NHTSA probe already covers approximately 2.9 million vehicles over alleged traffic-law violations by FSD, including running red lights and wrong-way driving. That investigation, which has its own NHTSA correspondence documenting complaint counts and incident patterns, targets a different failure mode but the same underlying software.
Together, the two probes span overlapping vehicle populations and paint a picture of a system that struggles both with basic traffic rules and with adverse environmental conditions. For Tesla, the combined regulatory exposure is significant. A recall stemming from either investigation could require changes to every FSD-equipped vehicle sold in the United States, and the 3.2 million figure from the expanded probe represents the upper bound of that exposure.
The earlier probe into alleged traffic violations already raised questions about how FSD handles intersections, lane selection, and speed control. Now, by adding low-visibility crashes to the mix, NHTSA is examining how the system behaves when its primary sensing modality is compromised. The two lines of inquiry intersect in a critical way: drivers may be least able to monitor and correct FSD’s mistakes precisely when the environment is most challenging.
What a Recall Could Mean for Tesla Owners
If NHTSA determines that FSD presents an unreasonable safety risk, the agency can compel a recall. Previous Tesla recalls related to Autopilot and FSD have been addressed through over-the-air software updates, which cost the company relatively little and require no dealership visits. But the limits on federal authority in these cases are clear. NHTSA can mandate a remedy that reduces risk, but it cannot unilaterally force Tesla to redesign its sensor hardware or abandon the camera-only strategy that underpins its current vehicles.
For drivers, the practical impact depends on what the investigation finds. A software-level fix might restrict FSD operation during certain weather conditions, effectively geofencing the feature by visibility rather than location. The system could be required to disengage or refuse activation when onboard diagnostics detect heavy fog, low sun angles, or other risk factors, leaving more of the driving task to humans in exactly the moments when they may have grown accustomed to automation.
A more severe outcome could disable core FSD capabilities until Tesla demonstrates that the system meets safety standards in degraded conditions. Either scenario would reduce the functionality that many owners paid thousands of dollars to access. Some drivers might welcome stricter limits if they restore confidence in the system’s behavior, while others would likely view them as a devaluation of a promised feature set.
There is also the question of transparency. Owners currently receive on-screen warnings that FSD is a beta product and that they must remain attentive, but the low-visibility crashes suggest that many may not fully grasp how quickly the system’s performance can degrade when the cameras cannot see clearly. A recall could therefore include not only code changes but also clearer in-car messaging about when and where FSD should be trusted.
Autonomy Ambitions Meet Regulatory Reality
The timing adds another layer of tension. Tesla has publicly discussed plans to sell vehicles configured for higher levels of automation, including concepts that envision limited or no traditional controls for human drivers. Expanding a safety probe into FSD’s performance in fog and glare at the same moment the company pushes toward removing driver controls entirely creates a sharp contradiction. The agency is questioning whether FSD can safely assist a human driver in bad weather while Tesla is betting on a future where software, not people, will handle the full driving task.
Regulators are not evaluating that future in the abstract; they are looking at concrete crash data from today’s roads. The nine low-visibility incidents under PE24031, combined with the alleged traffic-law violations in the separate 2.9 million-vehicle probe, form an empirical record that will shape how aggressively NHTSA is willing to allow automated systems to expand. If the current generation of FSD cannot reliably manage fog, glare, and basic traffic controls, arguments for rapid removal of steering wheels and pedals will be harder to sustain.
For now, Tesla owners remain in an uneasy middle ground. They are told to supervise a system that is powerful enough to handle much of the driving workload, but not yet robust enough to be trusted without human backup in the very conditions that most challenge human drivers. The expanded NHTSA investigation will determine whether that compromise can be made safer through incremental software changes, or whether the underlying approach to sensing and autonomy needs to be rethought before millions more vehicles rely on it.
*This article was researched with the help of AI, with human editors creating the final content.