
“It’s smarter than you think”: Tesla Full Self-Driving finally exposed

Tesla’s “Full Self-Driving” software has been sold to consumers as a system on the verge of true autonomy, but a trail of regulatory actions, federal investigations, and fatal crash findings tells a different story. The gap between what Tesla’s marketing has promised and what the technology actually delivers has drawn enforcement from California regulators and sustained scrutiny from federal safety agencies. What emerges from the primary record is not a system that is smarter than drivers think, but one whose real-world limits have been obscured by branding that overstates its capabilities.

California Forces Tesla to Fix Its Misleading Marketing

The California Department of Motor Vehicles pursued an administrative case against Tesla over the company’s use of the terms “Autopilot” and “Full Self-Driving,” arguing that the names misled consumers about what the software could actually do. After a hearing held from July 21 to 25, 2025, an administrative law judge issued a proposed decision on November 20, 2025. The state’s final word came on December 16, 2025, when the agency announced that Tesla had taken corrective action to avoid a suspension of its dealer license, as described in the DMV’s own enforcement summary. Under that agreement, Tesla avoided harsher penalties only by committing to change its marketing to address the DMV’s concerns and to clarify the limitations of its driver-assistance features.

That outcome matters beyond California because it established an official regulatory finding that Tesla’s branding could mislead consumers. The state did not merely issue a warning or request voluntary changes; it built a formal administrative record through a multi-day hearing, a judicial recommendation, and a binding decision. Tesla’s corrective action was taken under legal pressure, not as a goodwill gesture. For buyers who paid thousands of dollars for the software package, in part on the strength of its name, a state regulator effectively confirmed that the product’s labeling and marketing could overpromise what the technology delivered. The corrective action raises a direct question for current and prospective owners: if the company itself had to revise its marketing to satisfy regulators, how much confidence should drivers place in the system’s real-time performance?

A Fatal Crash Revealed the System’s Core Weaknesses

The regulatory case did not emerge in a vacuum. Years before California acted, a fatal crash on March 23, 2018, in Mountain View, California, exposed the kind of failure that misleading marketing can set up. A Tesla SUV operating with partial driving automation struck a highway crash attenuator, killing the driver. The federal investigation into that crash, catalogued as NTSB case HWY18FH011, identified a combination of system limitations, driver distraction and overreliance on automation, and ineffective monitoring of driver engagement as the probable cause. Investigators found that the vehicle steered out of its lane toward a gore area, accelerated into the barrier, and did not detect the impending collision in time to avoid it.

Each element of that finding points to a feedback loop that remains relevant today. The system had operational limits it could not communicate clearly to the driver, particularly around lane-keeping in complex roadway geometries. The driver, trusting the software’s branded promise of near-autonomy, paid less attention to the road than manual driving would have required, reportedly interacting with a mobile device in the moments before impact. And the car’s own monitoring tools failed to detect or correct that inattention before the collision, even though the driver’s hands were off the wheel for extended periods. The NTSB issued safety recommendation H‑19‑013, along with related recommendations urging stronger safeguards, and framed the Mountain View crash not as an isolated malfunction but as a predictable consequence of a system that encouraged the very complacency it could not safely accommodate.

Federal Watchdogs Urged Action Long Before 2025

The NTSB did not stop at issuing recommendations tied to a single crash. In a public statement, NTSB Chair Jennifer L. Homendy directly connected the board’s safety recommendations to NHTSA’s defect-investigation activity involving Tesla. Homendy’s remarks urged NHTSA to evaluate Autopilot’s limits, assess the risks of foreseeable misuse, examine the boundaries of the system’s operational design domain (the road types and conditions it is actually designed to handle), and push for stronger driver-monitoring protections. The statement made clear that the NTSB viewed federal oversight as lagging behind the known risks of partial automation, especially when it is marketed in ways that could mislead ordinary drivers about how much attention the system still requires.

That pressure contributed to changes in how the federal government tracks automation-related incidents. In 2021, NHTSA issued Standing General Order 2021‑01, which created new crash-reporting rules for both fully automated driving systems and SAE Level 2 advanced driver-assistance systems. The order requires manufacturers and operators to report qualifying crashes within defined timelines, and NHTSA makes downloadable incident data available to the public. Yet the existence of a reporting framework does not automatically close the gap between data collection and enforcement. The agency’s own supporting documentation acknowledges limitations in the data, including underreporting and inconsistent detail, meaning the federal government is still working with an incomplete picture of how often and how seriously these systems fail on public roads.
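For a sense of what that public record looks like in practice, the short sketch below tallies incident reports by reporting company from a locally downloaded copy of the Standing General Order data. It is a minimal illustration under stated assumptions: the file name and the “Reporting Entity” column used here are placeholders, and the actual field names should be checked against NHTSA’s published data documentation.

    # Minimal sketch: count Level 2 incident reports per reporting company in a
    # locally downloaded copy of NHTSA's Standing General Order data.
    # Assumptions: the CSV file name and the "Reporting Entity" column are
    # illustrative placeholders, not confirmed field names from NHTSA's schema.
    import csv
    from collections import Counter

    CSV_PATH = "sgo_2021_01_level2_incidents.csv"  # assumed local file name

    def count_reports_by_entity(path: str) -> Counter:
        """Tally incident reports per reporting company (assumed column name)."""
        counts: Counter = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                entity = (row.get("Reporting Entity") or "").strip() or "Unknown"
                counts[entity] += 1
        return counts

    if __name__ == "__main__":
        for entity, total in count_reports_by_entity(CSV_PATH).most_common(10):
            print(f"{entity}: {total} reports")

Even a simple tally like this inherits the limitations NHTSA itself acknowledges: underreporting and inconsistent detail mean the raw counts give only a partial picture of what is happening on the road.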

Red-Light Violations Trigger a New Federal Probe

Even with reporting mandates in place, Tesla’s FSD continued to draw federal attention for specific, observable failures. In 2025, federal regulators opened a new safety probe into Tesla’s Full Self-Driving software over alleged red-light running and other traffic violations, as described in a national news investigation. Running a red light is not an ambiguous edge case or a rare sensor glitch; it is a basic rule of the road that any system marketed as “self-driving” should handle without fail. The investigation signals that regulators are no longer content to wait for fatal outcomes before acting and are instead examining whether the software violates traffic law during routine operation, potentially endangering pedestrians and cross traffic even when collisions are narrowly avoided.

This probe also challenges a common defense of FSD: that the system improves with each over-the-air update and that early problems will be trained away over time. If the technology is still running red lights after years of real-world data collection and iterative updates, the learning-curve argument begins to look less like a safety strategy and more like a justification for live testing on public roads. Regulators now have years of crash reports, defect complaints, and incident videos to examine, alongside formal reporting under NHTSA’s standing order, and the red-light investigation suggests they are starting to treat repeated traffic-law violations as a defect pattern rather than an acceptable byproduct of innovation. For drivers, the message is that “beta” labels and promises of future improvements do not erase the legal and physical consequences of a system that can blow through a signal at full speed.

A Pattern of Overpromise and Under-Protection

Viewed together, the California enforcement action, the Mountain View fatal crash, NTSB’s public prodding of NHTSA, and the red-light probe outline a consistent pattern. Tesla’s marketing has encouraged drivers to see FSD and Autopilot as steps toward full autonomy, even as official investigations have documented basic shortcomings in lane keeping, driver monitoring, and adherence to traffic signals. Regulators at both the state and federal levels have been slow to respond, but when they have acted, their findings have repeatedly confirmed that the technology is less capable—and more fragile—than the branding suggests. The result is a system in which ordinary consumers are asked to manage complex automation risks that even professional investigators and engineers continue to debate.

For policymakers, the record raises uncomfortable questions about how to regulate software that is updated continuously yet sold under names that imply stable, finished capability. California’s insistence that Tesla correct how it markets “Autopilot” and “Full Self-Driving” may become a template for other jurisdictions that want to rein in exaggerated claims without banning specific technologies outright. At the federal level, NHTSA’s crash-reporting orders and defect probes show that regulators are slowly building the tools needed to see patterns across thousands of incidents, but they also underscore how far the system has to go. Until oversight catches up with marketing, drivers remain the last line of defense against software that can fail in predictable ways while still being advertised as almost, but not quite, self-driving.

*This article was researched with the help of AI, with human editors creating the final content.