
Tesla’s quiet experiment with a hidden driver-assist setting, widely dubbed “Elon mode,” has now escalated into a full-blown federal investigation that cuts to the heart of how far automation should go on public roads. Regulators are probing whether the company disabled key safety checks in its Autopilot system, and whether that choice fits into a broader pattern of overpromising what its cars can safely do. The outcome will shape not only Tesla’s future, but also how aggressively other automakers push semi-autonomous features.
What ‘Elon mode’ actually changes on the road
At its core, “Elon mode” is about silence where there should be nagging. In standard configurations, drivers using Autopilot are supposed to keep their hands on the wheel and respond to frequent alerts that check whether they are paying attention. The secret setting reportedly dials those alerts way down or switches them off, allowing the car to steer, accelerate, and brake for extended stretches without the usual prompts to re-engage. That might make long highway drives feel smoother, but it also removes a key barrier that keeps people from treating driver-assist software like a fully self-driving chauffeur.
Regulators are alarmed because this hidden configuration appears to let some owners run Autopilot for long periods without touching the wheel at all, even though the system is not designed to replace a human driver. Reporting on the feature describes how Tesla Autopilot can be used for extended periods without forcing a hand check, which is exactly the kind of behavior safety agencies have been trying to prevent. By burying a mode that relaxes those safeguards, Tesla has invited questions about whether it prioritized driver convenience and product mystique over the basic design principle that automation should keep people engaged, not lull them into checking out.
Why federal regulators are turning up the heat
The federal scrutiny did not come out of nowhere. US highway safety regulators sent Tesla a detailed letter demanding information on how “Elon mode” works, how many cars have it, and who authorized its use, warning that failure to cooperate could trigger fines of up to $26,000 per day. That kind of penalty language is reserved for cases regulators consider especially serious, and it signals that they see the hidden mode as more than a quirky software tweak. The core concern is whether Tesla quietly changed the risk profile of its cars on public roads without giving drivers, or the government, a clear picture of what had changed.
In parallel, safety officials have ordered the company to turn over extensive data on how Autopilot behaves when the usual alerts are suppressed. The request covers how often the system is engaged, how frequently drivers ignore or miss warnings, and what happens in the moments before a crash or near miss. Investigators are particularly focused on the way “Elon mode” reportedly turns off or delays the standard steering wheel nags and chimes that are supposed to keep people attentive, a shift described in detail in federal requests for extensive data. The investigation is not just about one hidden setting; it is about whether Tesla has been candid about the limits of its automation and the safeguards that are supposed to keep it in check.
A broader safety record under the microscope
“Elon mode” is landing in a context where Tesla’s automated driving record is already under intense review. Federal safety officials are examining roughly 2.9 million Teslas equipped with the company’s Full Self-Driving package, or FSD, looking at how those vehicles behave in real-world traffic and how often they are involved in crashes. Critics argue that the branding of FSD, combined with Autopilot’s marketing, encourages people to overestimate what the systems can safely handle, especially in complex urban environments. When a hidden mode then appears to loosen the few hard constraints that exist, it reinforces the perception that Tesla is pushing the envelope faster than regulators can keep up.
That perception is sharpened by a string of high-profile crashes and lawsuits tied to Autopilot. In Benavides v. Tesla, jurors in Miami awarded more than $240 million in damages, including $200 million in punitive awards, after finding that Tesla bore part of the responsibility for a fatal crash involving its Autopilot system; Tesla is now appealing the roughly $243 million judgment. Verdicts of that size show how juries are starting to treat Autopilot failures not as isolated mishaps but as systemic problems that warrant steep punitive penalties.
From safety questions to potential fraud
The legal exposure is not limited to crash liability. Last October, Tesla disclosed that the Justice Department had asked the company for information about Autopilot and Full Self-Driving, signaling that federal prosecutors were looking beyond engineering questions to how the technology has been marketed and described to investors. Separate reporting indicates that the Justice Department is now examining whether Tesla, in promoting its driver-assist systems, may have crossed the line into securities or wire fraud. The inquiry is reportedly focused on whether statements about the capabilities and safety of Tesla Autopilot were misleading in ways that could have influenced both buyers and shareholders.
“Elon mode” fits awkwardly into that picture. If a company is already under investigation for how it has portrayed the safety and autonomy of its vehicles, the discovery of a secret configuration that disables safety alerts raises obvious questions about disclosure and intent. Federal regulators have reportedly pressed Tesla to explain why the mode existed, who had access to it, and whether it was ever used in public demonstrations or internal testing that fed into optimistic claims about the technology. The fact that the feature persisted despite earlier attention from safety advocates, as described in coverage of the federal scrutiny, will likely be central to any argument that the company knowingly downplayed risks.
What this means for drivers, regulators, and the future of Autopilot
For everyday owners, the “Elon mode” saga is a reminder that the most important part of any driver-assist system is still the human behind the wheel. Even as Tesla and other automakers push toward more advanced automation, the current generation of systems is built on the assumption that a person is ready to take over at a moment’s notice. When a hidden setting relaxes the guardrails that keep that person engaged, it undercuts the basic safety model regulators have tried to enforce. That is why federal agencies are not just asking about one software flag; they are reexamining how Autopilot, FSD, and related features are deployed across millions of cars, including the Tesla vehicles already under investigation over self-driving incidents.
Regulators, for their part, are under pressure to show they can keep pace with rapid software changes that can alter a car’s behavior overnight. The scrutiny of “Elon mode” is intertwined with a broader review of how federal safety officials handle emerging automotive technologies, from the initial discovery of the hidden feature to the ongoing probes into Autopilot crashes and FSD performance. As regulators and other policymakers weigh new rules, they will be looking not only at the technical merits of features like “Elon mode,” but also at the culture inside Tesla that allowed such a mode to exist without clear public explanation. In that sense, the investigation is as much about corporate governance and transparency as it is about code.