Morning Overview

No hands, no screen: Tesla’s new AI parks exactly where you say

The National Highway Traffic Safety Administration is investigating Tesla’s remote parking feature, known as Summon, after the system was linked to more than a dozen crashes. The probe, cataloged as preliminary evaluation PE24033, centers on whether the AI-driven tool can reliably detect and avoid obstacles when a driver commands the car to move without touching the steering wheel or looking at a screen. The case arrives just as NHTSA tightens its crash-reporting rules for automated driving technology, raising pointed questions about whether Tesla met its disclosure obligations.

Federal Probe Targets Summon After Repeated Collisions

NHTSA opened the preliminary evaluation into Tesla’s remote parking feature after documenting over a dozen crashes involving the system, as reflected in its public record for case PE24033. The incidents suggest a pattern: the car, summoned remotely by a driver standing nearby, allegedly failed to stop for obstacles in its path. That failure mode is exactly the kind of defect NHTSA’s investigation process is designed to surface, and the volume of reported crashes was enough to trigger a formal review of both the software logic and the sensing hardware that underpin Summon. If the evidence warrants, the agency can escalate a preliminary evaluation into an engineering analysis, the step that typically precedes a recall demand.

Tesla’s own documentation limits Summon to use on private property, a restriction highlighted in the company’s manual and reported by the Associated Press. Yet the crashes under review indicate the feature has been used in less controlled settings, including parking lots and areas with pedestrian traffic. That gap between intended use and real-world behavior sits at the center of the federal inquiry. If the system’s AI cannot reliably park where users actually tell it to park, the safety case for hands-free, screen-free summoning weakens considerably, and regulators may decide that software limits or physical safeguards are necessary to keep the feature within safer bounds.

Crash Reporting Rules Tighten at a Critical Moment

The timing of this investigation is significant. NHTSA’s Third Amended Standing General Order 2021-01, effective June 16, 2025, sharpens the definitions automakers must follow when reporting crashes involving automated systems, as detailed in the updated ADAS reporting guidance. The order distinguishes between fully autonomous driving systems and Level 2 advanced driver assistance systems, specifies when such systems are considered “engaged,” and sets clearer timelines for disclosure. Summon, which requires a human to initiate the command but then operates the vehicle without direct physical input, falls squarely within the Level 2 ADAS category under these definitions, making its crash history directly relevant to the new rules.

The amended order matters because it removes ambiguity that manufacturers could previously use to avoid filing reports. Under the broader Standing General Order framework, companies must report crashes that involve injuries or require a vehicle to be towed while an ADAS feature is engaged. The Associated Press reported that the Summon probe includes allegations Tesla did not report certain crashes despite these requirements. If those allegations hold up, the company could face enforcement action beyond the current investigation, because the Standing General Order is not optional guidance. It carries regulatory weight, and the updated version leaves less room for interpretation about what counts as a reportable event, especially when an automated feature is clearly in control of the vehicle’s movement.

The Off-Label Use Problem

A recurring tension in the Summon investigation is the distance between how Tesla describes the feature and how drivers actually use it. The manual designates Summon for private property, essentially driveways and home garages, where traffic patterns are predictable and speeds are low. But the appeal of telling a car to come to you, with no hands on the wheel and no interaction with the touchscreen, is strongest in exactly the places Tesla says not to use it: crowded parking lots, retail centers, and pickup lanes. In those environments, the AI that powers Summon must interpret a complex scene in real time, identifying curbs, pedestrians, shopping carts, and other vehicles without the safety net of a human behind the wheel ready to intervene the instant something goes wrong.

This mismatch creates a regulatory and design dilemma. If crashes happen primarily during off-label use, Tesla can argue the feature performed as designed within its stated limits and that drivers assumed extra risk by ignoring the manual. But NHTSA’s investigation suggests the agency is not satisfied with that framing. The probe’s scope covers whether the system itself has a defect in obstacle detection and response, not merely whether drivers misused it. For owners, that distinction matters: the federal government is asking whether the AI is robust enough to handle the foreseeable ways people actually use the feature, not just the narrow scenarios described in product literature, and the answer could influence how future driver-assistance features are labeled, marketed, and constrained.

Public Data Offers a Window Into Complaint Patterns

NHTSA maintains extensive datasets and APIs that allow anyone to track investigation openings, consumer complaint volumes, and recall campaign records for specific vehicles and features. These tools provide an independent check on manufacturer claims and marketing narratives. For Tesla specifically, the datasets can surface complaint narratives describing Summon failures, offering granular detail about what drivers experienced before, during, and after a crash or near miss. Because the data is standardized and updated regularly, outside analysts can look for patterns in how often Summon is mentioned, what kinds of obstacles are involved, and whether certain models or software versions appear more frequently in complaints.
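As a concrete illustration of how an outside analyst might start, the sketch below builds a query URL for NHTSA's public complaints API. The base endpoint and parameter names (`make`, `model`, `modelYear`) reflect the API as commonly documented, but they are an assumption here; the live specification on NHTSA's datasets page should be treated as authoritative.

```python
from urllib.parse import urlencode

# Assumed base endpoint for NHTSA's public complaints API; verify against
# the agency's current API documentation before relying on it.
NHTSA_COMPLAINTS_API = "https://api.nhtsa.gov/complaints/complaintsByVehicle"


def build_complaints_url(make: str, model: str, model_year: int) -> str:
    """Build a query URL for complaints filed against a specific vehicle."""
    params = urlencode({"make": make, "model": model, "modelYear": model_year})
    return f"{NHTSA_COMPLAINTS_API}?{params}"


# Example: complaints filed against a 2023 Tesla Model Y.
url = build_complaints_url("TESLA", "MODEL Y", 2023)
```

Fetching that URL with any HTTP client returns a JSON payload of complaint records, which can then be filtered for Summon-related narratives.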

The datasets also reveal whether complaint volumes are rising or stable, which matters for understanding whether the Summon problem is isolated or growing as more Tesla owners activate the feature. NHTSA uses these complaint trends, alongside its own field investigations and engineering analyses, to decide whether to escalate a preliminary evaluation into a formal recall demand. For Tesla owners, the practical takeaway is straightforward: if you use Summon, the federal safety regulator has flagged specific concerns about the system’s ability to stop for obstacles, and the public data trail is transparent enough that the evolution of this probe—from initial complaints to potential recall—will be visible long before any official notice arrives in the mail or over the air.
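The pattern-scanning described above can be sketched in a few lines: group complaint records by model and count those whose narrative mentions Summon. The field names (`summary`, `model`) and the sample records are hypothetical, chosen only to illustrate the shape of the analysis, not the actual API schema.

```python
from collections import Counter


def summon_mention_counts(complaints: list[dict]) -> Counter:
    """Count complaint narratives mentioning Summon, grouped by vehicle model.

    `complaints` is a list of dicts shaped roughly like NHTSA complaint
    records; the field names used here are assumptions for illustration.
    """
    counts: Counter = Counter()
    for record in complaints:
        narrative = record.get("summary", "").lower()
        if "summon" in narrative:
            counts[record.get("model", "UNKNOWN")] += 1
    return counts


# Hypothetical sample records, for illustration only.
sample = [
    {"model": "MODEL Y", "summary": "Vehicle failed to stop during Summon in a lot."},
    {"model": "MODEL 3", "summary": "Brakes squealed at low speed."},
    {"model": "MODEL Y", "summary": "Smart Summon struck a shopping cart."},
]
print(summon_mention_counts(sample))  # Counter({'MODEL Y': 2})
```

Run over the full complaint dataset rather than a toy sample, the same grouping could be extended by model year or software version to test whether Summon complaints are concentrated or spreading.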

What the Probe Means for Hands-Free Parking AI

Tesla’s Summon feature represents one of the most consumer-facing applications of parking AI on the market. The premise is simple and appealing: tell your car where to go, and it drives itself there while you stand at a distance. No hands on the wheel, no eyes on a screen. But the PE24033 investigation exposes a hard truth about deploying that kind of autonomy in unstructured environments. Parking lots are unpredictable. Children dart between cars. Shopping carts roll. Other drivers back out without looking. An AI system that cannot reliably handle those variables is not just a convenience feature with rough edges; it is a potential safety hazard operating in close proximity to vulnerable road users and property.

Whatever conclusion NHTSA reaches, the Summon probe will shape how regulators, automakers, and consumers think about hands-free parking for years to come. If investigators find a defect, Tesla could be required to push software updates that slow the system, tighten obstacle detection thresholds, or restrict where Summon can operate, and other manufacturers offering similar features may preemptively adjust their own systems to avoid similar scrutiny. If NHTSA instead concludes that Summon behaves as designed and that most crashes stem from misuse, the agency may still press for clearer labeling, stronger geofencing, or more conservative default settings. In either scenario, the message is clear: as parking AI moves from novelty to mainstream, the burden will be on automakers to prove not just that their systems work under ideal conditions, but that they remain safe in the messy, imperfect spaces where people actually park and walk every day.


*This article was researched with the help of AI, with human editors creating the final content.