What the Engineering Analysis Means
An engineering analysis is the second and more intensive phase of NHTSA's defect investigation process. While a preliminary evaluation gathers initial data and complaints, an engineering analysis allows the agency to examine vehicle design, software behavior, and crash reconstruction evidence in detail. If the agency finds a safety defect at this stage, it can compel a manufacturer to issue a recall.

The probe now covers about 3.2 million Tesla vehicles equipped with Full Self-Driving, a scope that reflects the broad deployment of the software across multiple model years and vehicle lines. That figure, also cited in a Reuters report, makes this one of the largest active NHTSA investigations by vehicle count. The breadth of vehicles under review signals that regulators see the pattern of crashes as potentially systemic, rather than confined to a narrow hardware configuration or a single software release.

According to an Associated Press account, the engineering analysis will focus on how FSD behaves in real-world traffic, particularly when environmental conditions deviate from the ideal scenarios often featured in Tesla's promotional materials. Investigators are expected to look at how the system interprets lane markings, responds to obstacles, and hands control back to drivers when its confidence drops.

Nine Crashes in Low-Visibility Conditions
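To illustrate why low visibility is hard for camera-based perception, here is a toy sketch, entirely hypothetical and in no way Tesla's actual pipeline, of how a vision system might flag a grayscale frame as washed out by glare or flattened by fog. The thresholds and the `frame_quality` helper are invented for illustration:

```python
from statistics import mean, pstdev

def frame_quality(pixels, glare_mean=230, fog_contrast=12):
    """Describe whether a grayscale frame looks degraded.

    pixels: iterable of 0-255 grayscale intensities.
    glare_mean: mean brightness above which we suspect glare washout.
    fog_contrast: standard deviation below which we suspect fog/haze.
    Both thresholds are illustrative, not values from any real system.
    """
    m = mean(pixels)        # overall brightness
    c = pstdev(pixels)      # spread of intensities, a crude contrast measure
    return {
        "mean": m,
        "contrast": c,
        "degraded": m >= glare_mean or c <= fog_contrast,
    }

clear_frame = [40, 90, 150, 200, 60, 120, 180, 30]      # varied scene
glare_frame = [250, 255, 248, 252, 251, 255, 249, 253]  # washed out

print(frame_quality(clear_frame)["degraded"])  # False
print(frame_quality(glare_frame)["degraded"])  # True
```

The point of the sketch is the failure mode, not the method: when glare pushes every pixel toward saturation, the statistics a camera-only system depends on carry far less information, which is the degradation the flagged crashes appear to involve.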
The triggering incidents share a common thread: drivers relying on FSD in conditions where visibility was impaired. A separate AP story on the crashes notes that the cases involve situations with fog, sun glare, and other environmental obstructions. In each, the FSD system either failed to detect a hazard or responded too late for the driver to intervene effectively, raising questions about how the software handles edge cases that human drivers routinely confront.

This pattern points to a specific technical weakness. Tesla's FSD relies heavily on camera-based perception rather than lidar or radar, a design choice that CEO Elon Musk has defended for years as sufficient for full autonomy. Cameras, however, are inherently sensitive to lighting extremes and atmospheric interference. When glare washes out a frame or fog scatters light, the system's ability to identify obstacles can degrade in ways that a human driver might partially compensate for but that software may not flag quickly enough.

The distinction matters because FSD is classified as a Level 2 driver-assistance system, meaning the human behind the wheel is supposed to remain attentive and ready to take over at all times. Yet if the software appears to function normally in poor visibility, drivers may not realize they need to intervene until it is too late. That gap between perceived capability and actual performance is exactly the kind of potential defect NHTSA's engineering analysis is designed to evaluate, using both crash data and technical documentation from Tesla.

How NHTSA Tracks Crash Patterns
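The pattern-screening role that crash databases play can be illustrated with a small hypothetical sketch: given crash reports tagged with an environmental condition, as CRSS-style datasets allow, measure how concentrated the crashes are in low-visibility conditions. The records and the `LOW_VISIBILITY` category set below are invented for illustration, not real CRSS data:

```python
from collections import Counter

# Illustrative condition categories; real CRSS coding is far more detailed.
LOW_VISIBILITY = {"fog", "sun glare", "heavy rain", "dust"}

# Invented crash reports, each tagged with one environmental condition.
reports = [
    {"id": 1, "condition": "fog"},
    {"id": 2, "condition": "sun glare"},
    {"id": 3, "condition": "clear"},
    {"id": 4, "condition": "fog"},
    {"id": 5, "condition": "sun glare"},
    {"id": 6, "condition": "clear"},
]

# Count crashes per condition, then total those in low-visibility categories.
counts = Counter(r["condition"] for r in reports)
low_vis = sum(n for cond, n in counts.items() if cond in LOW_VISIBILITY)
share = low_vis / len(reports)
print(f"{low_vis} of {len(reports)} crashes ({share:.0%}) in low visibility")
```

A concentration like this is only a screening signal: it tells regulators where to look, not why the crashes happened. That is the line between statistical triage and the engineering analysis itself.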
The agency's ability to spot trends like this depends on structured data collection. NHTSA operates the Crash Report Sampling System, a nationally representative dataset drawn from police-reported crashes across the country. CRSS provides the statistical backbone for identifying whether a particular vehicle feature or technology is associated with an unusual rate or type of collision, helping regulators decide which issues warrant intensive follow-up.

For advanced driver-assistance systems like FSD, this data infrastructure is especially important. Traditional crash investigations focus on mechanical failures or driver impairment. Software-driven crashes require a different analytical framework, one that accounts for sensor inputs, algorithmic decision-making, and the interaction between automated systems and human attention.

By moving to an engineering analysis, NHTSA gains authority to demand detailed technical evidence from Tesla, including software logs, sensor data, and internal testing records related to the nine flagged incidents. Investigators can then compare these records with external crash reports to reconstruct what the system “saw” and how it decided to act in the moments before impact. That reconstruction will be central to determining whether the crashes stemmed from a correctable software bug, a broader design limitation in camera-only perception, or misuse by drivers who may have overestimated the system's capabilities.

Tesla's Autonomy Ambitions Face Friction
The timing of this escalation creates a direct conflict with Tesla's commercial strategy. The company has been preparing to market vehicles that lack traditional steering wheels and pedals, a product concept that assumes FSD can operate without any human fallback. Regulatory approval for such a design would require NHTSA to accept that the software meets safety standards far beyond what is expected of a Level 2 assistance system. An active engineering analysis into FSD crashes makes that approval harder to justify in the near term.

If the agency determines that FSD has a defect in handling reduced visibility, the implications extend well beyond a simple over-the-air patch. A formal recall could require Tesla to limit certain FSD functions in adverse weather, add new driver-monitoring safeguards, or fundamentally change how the system communicates its limitations to users. Any of those outcomes would complicate the case for a steering-wheel-free vehicle that assumes near-perfect autonomy.

Musk has repeatedly promoted FSD's capabilities in public statements, framing the software as close to full autonomy and central to Tesla's valuation. Safety advocates argue that this messaging encourages drivers to over-trust the system, blurring the line between driver assistance and self-driving. The engineering analysis adds a concrete, data-driven dimension to that debate: nine documented crashes in similar conditions, backed by federal scrutiny, are harder to dismiss than scattered anecdotal complaints on social media.

Investigation Timeline and Extensions
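One analytical task that over-the-air updates create for investigators is mapping each crash to the software version that was active at the time, so that defect findings attach to the right release. A minimal hypothetical sketch, with invented version names and release dates:

```python
from bisect import bisect_right
from datetime import date

# Invented release history for illustration; sorted by release date.
RELEASES = [
    (date(2024, 1, 10), "v12.1"),
    (date(2024, 4, 2), "v12.3"),
    (date(2024, 8, 15), "v12.5"),
]

def version_on(day):
    """Return the version released on or before `day`, or None if none was.

    Uses binary search over the sorted release dates, so the lookup stays
    cheap even for a long update history.
    """
    dates = [d for d, _ in RELEASES]
    i = bisect_right(dates, day)
    return RELEASES[i - 1][1] if i else None

print(version_on(date(2024, 5, 1)))   # v12.3
print(version_on(date(2024, 9, 9)))   # v12.5
```

In practice the mapping is messier than a date lookup, since fleets update on a rolling basis and a given car may lag the latest release; per-vehicle update logs, not a global release calendar, would be the authoritative source.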
The probe has already gone through procedural adjustments. Tesla was granted extra time to respond to NHTSA's information requests, reflecting the complexity of compiling software records, test results, and internal safety analyses. The specific deadlines for the engineering analysis phase have not been publicly disclosed, but extensions are not unusual in complicated automotive safety cases, particularly when the subject involves rapidly updated software.

That update cycle introduces a unique challenge. Unlike a mechanical component that remains static after manufacture, FSD's software evolves through frequent over-the-air updates. NHTSA must determine not only whether the versions of FSD active during the nine crashes contained a defect, but also whether subsequent updates have mitigated or exacerbated the underlying risk. Investigators will likely examine how Tesla documents software changes, tests them before release, and communicates new capabilities or restrictions to customers.

Regulators also have to decide how to frame any potential remedy. If the problem is tied to low-visibility performance, one option is to require stricter geofencing or environmental limits, such as automatically disabling certain FSD functions when onboard sensors detect fog, heavy rain, or extreme glare. Another is to mandate more robust driver-monitoring systems that can detect distraction or inattention when FSD is engaged, reinforcing its Level 2 status rather than allowing it to be treated as self-driving in practice.

What Comes Next
The engineering analysis does not guarantee a recall, but it significantly raises the stakes. At the end of the process, NHTSA could close the investigation with no further action, negotiate a voluntary software update with Tesla, or formally conclude that a safety defect exists and order a recall affecting millions of vehicles. Any of those outcomes will reverberate across the auto industry, where other manufacturers are also racing to deploy increasingly automated driving systems.

For Tesla owners, the near-term impact is uncertainty. FSD remains a Level 2 system that requires constant driver supervision, and that legal framing will not change unless and until regulators approve a higher level of autonomy. The investigation underscores that, despite ambitious marketing and rapid software iteration, federal safety oversight still has the final word on how far and how fast automated driving technology can advance on public roads.

*This article was researched with the help of AI, with human editors creating the final content.