Image Credit: No Swan So Fine - CC BY-SA 4.0/Wiki Commons

A Tesla owner recently posted a breathless video praising Full Self-Driving after the software abruptly yanked his car off a clogged highway and into a grassy median, a maneuver that looked more like a stunt than a safety feature. The clip captured the strange dissonance at the heart of Tesla fandom: a system that can behave in startling, arguably dangerous ways, yet still inspires near-religious devotion from some of the people testing it on public roads. I see that tension running through the broader story of FSD, where genuine technical progress coexists with unresolved safety questions and a regulatory dragnet that is tightening around the brand.

To understand why a driver might cheer as his vehicle veers toward the grass, it helps to look at the culture that has grown up around Tesla’s software, the incentives that shape how fans talk about it, and the mixed record of the technology itself. The highway median incident is not an isolated curiosity; it is a vivid example of how marketing, community pressure, and selective storytelling can warp our sense of what counts as “safe enough” when a computer is steering two tons of metal at speed.

The viral median maneuver and the psychology of praise

In the now-viral clip, a Tesla owner filming from the driver’s seat watches as his car, running on Full Self-Driving, suddenly cuts across lanes and heads straight toward the center of a divided highway. Instead of braking in alarm, he exclaims in delight as the vehicle settles into the middle of a flat grass strip, narrowly avoiding obstacles and bypassing a long line of stopped traffic. The maneuver, as one analysis put it, was “totally pointless”: the median was not a designated lane, and any barrier or ditch would have turned the move into a crash. Yet the driver gushes that FSD “saved” him from congestion, while the software itself offers no sign it recognizes the risk it just created, a sequence captured in detail in the original highway median video.

Another account of the same episode notes that the dramatic intervention reveals a vista of bumper-to-bumper traffic stretching ahead, which helps explain why the driver was so eager to celebrate the move as a clever shortcut rather than a near miss. In that retelling, the Tesla ends up in the center of the grass median, with the owner still praising the system even as it sits in a place no human driver would consider a legal or sensible stopping point, a reaction described in the follow-up coverage. I see in that response a powerful example of confirmation bias: once someone has invested money, time, and personal identity in being an early adopter, they are primed to interpret even alarming behavior as proof of brilliance rather than a warning sign.

Inside the Tesla influencer ecosystem

The median incident did not emerge from a vacuum; it sits inside a thriving ecosystem of Tesla influencers who build audiences, and sometimes income streams, by showcasing the most dramatic moments of FSD in the wild. One prominent example is Tesla influencer Justin Demaree, known online as Bearded Tesla Guy, who attempted a cross-country trip in his 2023 Model Y using the software and ended up with a spectacular failure when the car went airborne after misjudging a railroad crossing. In that case, the vehicle neither braked nor swerved on its own, a sequence documented in a Shocking Incident Caught On Camera that undercut the narrative of flawless autonomy but still generated clicks and attention.

Another livestream, this time by a different Tesla owner, ended with a crash while the driver was demonstrating the car’s “Full Self-Driving” features for an online audience. The footage, which quickly went viral, shows the driver narrating the system’s behavior right up until impact and then panning across the aftermath; it was later dissected in detail, including the timestamp “11:43” that marked the moment things went wrong, in a report on how a Tesla driver crashes during livestream. When I watch these clips, I see not just risky driving, but a feedback loop where the most extreme footage, whether triumphant or disastrous, is rewarded with views, reinforcing a culture that treats public roads as test tracks and audiences as co-pilots.

Life-saving narratives and the allure of redemption

Alongside the crash reels and near misses, there is a parallel genre of FSD storytelling built around salvation. In one widely shared video, a driver named Aaron recounts how FSD took over during a medical emergency, navigating through traffic, making a critical turn, finding an open parking spot, and backing in, all while he says he never touched the wheel. The clip, which presents the software as a guardian angel that handled the entire trip without human input, has been promoted as proof that the technology can literally save lives, a claim that rests heavily on Aaron’s account in the Driver says Tesla FSD saved his life video.

Another recent story involves a Cybertruck owner who credits Tesla FSD with saving his life after the system reacted in a fraction of a second to avoid a collision. That account emphasizes that, despite its name, the software is officially classified as an SAE Level 2 system, which means drivers must remain attentive and ready to take over at all times, even when the car appears to be handling everything. The owner’s description of how the truck responded in what he calls a matter of a second or less is used to argue that human reflexes alone might not have been enough, a framing laid out in a piece explaining how a Cybertruck owner says Tesla FSD saved his life. I find these redemption narratives powerful, but they also risk overshadowing the mundane reality that most miles driven on FSD are uneventful, and that both good and bad outcomes can hinge on how closely the human behind the wheel is paying attention.

Regulators, lawsuits, and the hard edge of accountability

While fans trade stories online, regulators have been amassing their own dataset, and the picture they see is far less romantic. Federal safety officials have opened a sweeping probe into nearly 2.9M Tesla cars over traffic violations linked to the company’s self-driving system, a figure that underscores how widely the software has been deployed on public roads. That investigation is focused on reports that vehicles using the technology have been involved in incidents where they failed to obey traffic controls, a concern spelled out in a report on how Nearly 2.9M Tesla cars are now under scrutiny.

In a separate thread of accountability, a jury recently awarded $129 million in compensatory damages and $200 million in punitive damages in a case involving a crash for which Tesla was found partly responsible, assigning the company 33% of the fault. That verdict, which sits alongside a very different story of a driver who reportedly completed a coast-to-coast FSD trip without human intervention, highlights how the same technology can be framed as both marvel and menace depending on the outcome. The legal record, including the $129 million and $200 million awards and the 33% share of liability, is detailed in coverage of how a jury ruled after Tesla was sued, and it serves as a reminder that courts and regulators are less swayed by fan enthusiasm than by crash reports and statutory duties of care.

Progress, limits, and what “best” really means

None of this is to say that FSD is a static or trivial piece of software. Some testers who have spent years with the system argue that the latest supervised version is the most capable driver-assistance package on the market, with smoother lane changes, better handling of complex intersections, and a more human-like driving style than earlier iterations such as Enhanced Autopilot. One detailed evaluation even concludes that, if you have been burned by a previous version, the current release may finally deliver on the promise of a genuinely helpful co-pilot, describing Tesla FSD (Supervised) as a standout among driver aids.

At the same time, other real-world tests keep exposing the system’s blind spots. A separate Model Y crash, also captured on video, showed how FSD struggled with a complex environment and ended up highlighting what one analysis called its biggest weakness, even as the duo attempting the trip had hoped to showcase a Level 3 self-driving system. That episode, which unfolded during a long-distance journey and was later dissected in a piece on how a Tesla Model Y crash exposes the gap between marketing and reality, reinforces my view that calling any Level 2 system “Full Self-Driving” invites confusion about its true capabilities and limits.

Between miracle and mishap, a culture still in beta

There is a quieter side to the story that rarely goes viral. Some drivers report that FSD has helped them avoid collisions in ways that feel almost uncanny, such as a case where Tesla’s misleadingly named Full Self-Driving apparently saved the life of a man named Clifford by reacting faster than he could. That incident is cited as an example of how good outcomes rarely get the same attention as failures, even though they may be just as real, a point made explicitly in an analysis of how most people mock autonomous driving tech until it delivers a dramatic save. I think that asymmetry in attention helps explain why the public conversation can feel so polarized, with one side trading horror stories and the other clinging to tales of miraculous escapes.

Regulators, for their part, are trying to thread a needle between allowing innovation and responding to mounting evidence of risk. One investigation describes how Tesla was hit with a probe after crashes involving a self-driving feature that Musk has boasted about, and notes that the company responded with an over-the-air update to its software, a pattern that suggests a product still very much in beta even as it operates at scale. That tension is captured in reporting on how Tesla hit with probe over its self-driving feature, and it leaves me thinking that the real story is not whether FSD is good or bad, but whether the culture around it is capable of treating every median swerve, every airborne crossing, and every life-saving dodge as data points rather than as fodder for fandom.

Even some of the most enthusiastic reviewers acknowledge that history. One assessment aimed at skeptical owners opens by conceding that if you have been burned by a previous version of FSD or its earlier system, Enhanced Autopilot, the distrust is understandable, before arguing that the latest supervised release is a different animal. That framing, which appears in a closer look at how, if you’ve been burned by earlier software, you might still give FSD another chance, captures the core dilemma: Tesla’s technology is evolving quickly, but so is the body of evidence about its failures, and the gap between those two curves is where drivers, regulators, and the rest of us are now forced to live.