Morning Overview

Tesla’s Full Self-Driving faces possible federal recall scrutiny

The National Highway Traffic Safety Administration is tightening its focus on Tesla’s Full Self-Driving software, with a recent agency memo signaling that an existing investigation into reduced-visibility crashes could advance toward enforcement action, including a possible recall. The scrutiny arrives as Tesla prepares to sell vehicles without steering wheels, raising the stakes for the company’s autonomy strategy and for the millions of drivers who rely on its driver-assistance features.

How a Fatal Crash Triggered a Federal Probe

NHTSA opened its investigation, designated PE24031, after a pedestrian was killed in low-visibility conditions by a Tesla operating with FSD engaged. The agency's stated focus is how the system performs when environmental factors like fog, glare, or darkness limit sensor perception. That question is not academic: it tests whether FSD can safely handle the real-world driving scenarios Tesla markets it for.

The probe sits within a broader pattern. NHTSA’s crash-reporting framework, known as the Standing General Order, requires manufacturers to disclose incidents involving vehicles equipped with automated driving systems or Level 2 advanced driver-assistance systems. Tesla’s FSD falls under the Level 2 ADAS category, meaning the company must report qualifying crashes to the agency. NHTSA then uses that incident data to identify safety trends and decide whether to open or escalate investigations.

This pipeline from crash report to probe to potential enforcement is the mechanism now bearing down on Tesla. The Standing General Order does not just collect data passively; it feeds directly into the agency’s decision-making about whether a defect exists and whether a recall is warranted. Each reported crash involving FSD becomes another data point in NHTSA’s assessment of whether the technology performs as advertised in a wide range of conditions, including those that challenge human drivers.

A New Memo Points Toward Enforcement

The investigation took a sharper turn when a recent memo indicated that PE24031 could escalate toward enforcement action, potentially including a recall. That language matters. NHTSA investigations follow a structured path: a preliminary evaluation can be upgraded to an engineering analysis, and from there the agency can issue a recall demand if it finds an unreasonable safety risk. The memo's framing suggests the agency sees enough evidence to consider moving beyond fact-gathering.

At the same time, Tesla was granted additional time to respond to the agency’s information requests, according to correspondence cited in coverage of the probe. Extensions are not unusual in complex investigations, especially when regulators seek detailed technical data, logs, and internal analyses. But the combination of an extension and escalation language suggests the agency is building a comprehensive record before deciding on next steps.

The timing also carries weight. Tesla is actively preparing to sell cars without steering wheels, a move that would represent one of the most aggressive commercial bets on autonomous driving by a major automaker. If NHTSA concludes that FSD cannot reliably handle reduced-visibility conditions in vehicles that still have steering wheels and human backup, the regulatory path for a fully driverless Tesla becomes far more difficult. Any finding that FSD poses an unreasonable risk in specific scenarios could force Tesla to redesign core elements of its autonomy stack before regulators would even consider approving a steering-wheel-free vehicle for public roads.

What NHTSA’s Process Reveals About the Risk

Understanding how NHTSA publishes investigation materials helps explain why the public record on this probe remains incomplete. The agency’s guidance on investigations and recalls explains that documents such as opening resumes, information requests, and correspondence are posted to the agency’s website, but only after redaction for privacy and confidential business information. That redaction process means some details about what NHTSA has found, and what Tesla has disclosed, may not surface until the investigation reaches a formal conclusion.

This opacity cuts both ways. It protects Tesla’s proprietary data and the personal information of crash victims, but it also limits the ability of independent researchers, journalists, and the public to evaluate whether FSD’s reduced-visibility performance represents a systemic flaw or a narrow edge case. The distinction matters enormously: a systemic issue could trigger a recall affecting every vehicle running FSD software, while an isolated failure might result in a more targeted fix or a narrower set of software changes.

Still, the fact that NHTSA has moved beyond simple data collection into a formal defect investigation signals that the agency sees more than an isolated anomaly. Preliminary evaluations are typically reserved for patterns that suggest a recurring safety concern. In this case, the pattern appears to involve how FSD perceives and responds to pedestrians and other road users when visibility is degraded, a domain that has long challenged both human drivers and automated systems.

Tesla’s Own Filings Acknowledge the Exposure

Tesla itself has signaled awareness of the regulatory risk. The company’s Form 10-Q for the quarter ended March 31, 2025, includes risk factors and legal proceedings language addressing government investigations into its autonomy and driver-assistance features. SEC filings of this type are carefully reviewed by corporate counsel, and the inclusion of such language reflects a material assessment that regulatory action could affect the business.

That disclosure is not throwaway legal boilerplate. When a company flags regulatory proceedings in a quarterly filing, it is signaling to investors that the outcome could have financial consequences. For Tesla, a recall of FSD software would not necessarily mean pulling cars off the road. Software recalls typically involve over-the-air updates that can be deployed without a visit to a service center. But the reputational and strategic damage of a federal recall finding would be significant, particularly as Tesla positions FSD as the foundation of its robotaxi ambitions and as a key differentiator in an increasingly crowded EV market.

Moreover, the company’s own acknowledgment that regulatory scrutiny could impact its operations may influence how NHTSA frames any eventual enforcement action. Regulators often look to corporate disclosures to assess whether a manufacturer has fully appreciated and communicated safety risks. If Tesla has told investors that government investigations into FSD could be material, it becomes harder to argue that any resulting recall or software limitation is a minor business issue.

Why Low-Visibility Performance Is the Central Question

Much of the public conversation around autonomous driving safety focuses on highway performance or urban intersection handling. The NHTSA investigation highlights a different and arguably more demanding challenge: how well these systems perform when visibility degrades. Rain, fog, dust, low sun angles, and darkness all reduce the quality of data that cameras and sensors can collect. For a system like FSD that relies heavily on vision-based processing, these conditions represent a stress test that software updates alone may not fully resolve.

The fatal pedestrian incident that prompted PE24031 illustrates the stakes in human terms. A person died while a Tesla with FSD engaged was operating in conditions where the system’s perception may have been compromised. Whether that failure was caused by a software limitation, a sensor hardware constraint, or a gap in the system’s training data is precisely what NHTSA is trying to determine. The answer will shape not just Tesla’s regulatory future but also the broader debate over how and when automated driving systems should be deployed on public roads.

If NHTSA ultimately concludes that FSD has a defect related to reduced-visibility performance, the agency could require Tesla to modify how and where the system can be used. That might mean constraining FSD operation in certain weather or lighting conditions, changing how the system alerts drivers to take over, or imposing stricter requirements on driver attention monitoring. Any of those outcomes would undercut Tesla’s narrative that FSD is steadily progressing toward full autonomy under all normal driving conditions.

What Comes Next for Tesla and Regulators

For now, PE24031 remains an open investigation, and NHTSA has not announced a formal move to an engineering analysis or recall demand. The agency will continue reviewing crash data, Tesla’s technical submissions, and any additional incidents reported under the Standing General Order. Tesla, for its part, is likely to emphasize software improvements, updated training data, and refinements to driver monitoring as evidence that FSD is becoming safer over time.

The outcome will set an important precedent. If regulators determine that vision-centric systems like FSD can meet safety expectations even in degraded visibility, it will bolster the case for more aggressive deployment of advanced driver assistance and higher levels of automation. If they conclude the opposite, it could force a rethink of both Tesla’s roadmap and the broader industry assumption that software alone can overcome the hardest edge cases of real-world driving.

Either way, the current probe underscores a simple reality: as automated driving features move closer to full autonomy, the margin for error in conditions like foggy nights and glare-filled dawns shrinks dramatically. NHTSA’s evolving scrutiny of Tesla’s FSD in low-visibility scenarios is not just about one tragic crash; it is a test of whether the technology can safely shoulder responsibilities that, until now, have belonged to human drivers alone.

*This article was researched with the help of AI, with human editors creating the final content.