Several U.S. states are now deploying artificial intelligence cameras along highways and local roads to detect drivers using handheld phones and ignoring seat belts. Georgia, Minnesota, and Connecticut have each moved at different speeds and with different scopes, but the common thread is a growing reliance on AI-powered imaging to address distracted and unsafe driving, particularly among commercial vehicle operators. The programs raise practical questions about enforcement accuracy, privacy, and whether the technology actually changes driver behavior or simply generates citations.
Georgia Targets Commercial Trucks With Federal Grant Funding
Georgia’s Motor Carrier Compliance Division contracted with the Australia-based technology firm Acusensus and began operating AI camera systems in July 2024, funded through a federal High Priority grant described in the state’s 2024 annual report. The cameras capture in-cab images of commercial motor vehicle drivers, flagging visual evidence of illegal handheld phone use and seat belt non-compliance. The system is narrowly scoped: it applies to commercial vehicles, not passenger cars, and the images serve as a basis for enforcement actions by state troopers rather than automated ticketing.
That distinction matters. Unlike red-light or speed cameras that mail citations to vehicle owners, Georgia’s approach keeps a human officer in the enforcement loop. The AI identifies potential violations, but a trained reviewer decides whether the image warrants action. This design sidesteps some of the legal and political resistance that fully automated traffic enforcement has faced in other states, though it also limits how many violations the system can process at scale.
Georgia officials have framed the technology primarily as a commercial vehicle safety tool, in line with federal priorities to reduce serious crashes involving large trucks. By focusing on professional drivers, the state can argue that operators who spend all day on the road should be held to a higher standard and that targeted monitoring is justified by the potential harm from a distracted truck driver. Still, the same cameras that capture phone use can reveal other details inside cabs, from passengers to personal items, underscoring why civil liberties groups are likely to press for clear retention limits and access controls.
Minnesota’s Highway 7 Pilot Flagged Thousands of Violations
A suburban Minneapolis corridor has become one of the most active testing grounds for the same technology. The South Lake Minnetonka Police Department launched the Acusensus Heads-Up AI enforcement system on Feb. 10, 2025, as part of the Highway 7 Road Safety Coalition’s effort to curb crashes. According to a coalition report posted by the city of Shorewood, the system flagged more than 10,000 possible violations in its first month, a volume that would be physically impossible for patrol officers to match through traditional observation alone.
The coalition, which includes the city of Shorewood and neighboring jurisdictions, describes the AI camera as a tool that detects risky behavior, including cell phone use and seat belt status, and then notifies nearby officers in real time. That officer-notification model mirrors Georgia’s approach: the camera spots the behavior, but a patrol unit makes the stop. The sheer number of flags in a single month, however, raises a question that neither the coalition nor the vendor has publicly answered in available documents. How many of those 10,000 flags led to actual traffic stops, and how many were false positives?
That gap in public reporting is significant. A system that generates thousands of alerts but results in relatively few confirmed violations could overwhelm officers with noise, while one that converts flags into stops at a high rate could face scrutiny over whether drivers are being profiled by algorithm rather than observed by a human. Neither outcome is ideal without transparent data on conversion rates and accuracy.
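The trade-off described above can be made concrete with simple arithmetic. The sketch below uses the reported 10,000-flag figure, but the stop and confirmation counts are hypothetical, since the coalition has not published them; it merely shows which two ratios a transparency report would need to settle the question.

```python
# Illustrative arithmetic only: the 10,000 flags are from the coalition
# report, but the stop and confirmation counts below are hypothetical.
flags = 10_000       # alerts generated in the first month (reported)
stops = 800          # hypothetical: traffic stops officers actually made
confirmed = 600      # hypothetical: stops that confirmed a violation

stop_rate = stops / flags      # share of alerts an officer acted on
precision = confirmed / stops  # share of stops that held up

print(f"stop rate: {stop_rate:.1%}, precision: {precision:.1%}")
```

A low stop rate with high precision would suggest the system is flooding officers with more alerts than they can act on; a high stop rate with low precision would suggest drivers are being pulled over on weak algorithmic evidence. Neither figure is currently public.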
For drivers along Highway 7, the practical experience is also evolving. Early phases of the pilot have emphasized warnings and education rather than immediate ticketing, with local officials stressing that the goal is to change behavior, not just issue citations. As the program matures, residents and civil liberties advocates are likely to push for regular public reports that break down how many alerts led to stops, what proportion resulted in tickets, and whether certain groups or vehicle types are disproportionately targeted.
Peer-Reviewed Research Describes How the Technology Works
A study published in the journal Traffic Injury Prevention offers the most detailed independent look at the Acusensus system’s technical approach. The peer-reviewed paper explains how the firm’s “Heads-Up Solutions” AI conducts naturalistic observation of commercial motor vehicle driver behaviors. The system relies on visual cues, specifically whether a driver appears to be holding a phone and whether a seat belt is visible across the torso.
The research team describes a pipeline in which high-resolution roadside cameras capture images of passing vehicles, software isolates the driver compartment, and machine-learning models classify whether a phone or seat belt is present. Only images that meet a confidence threshold are passed on for human review. The paper emphasizes that humans remain responsible for final determinations, but the AI dramatically narrows the field of images that need to be checked.
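The triage step the paper describes can be sketched in a few lines. Everything here is illustrative: the class names, confidence scores, and the 0.90 cutoff are assumptions for the sake of the example, not Acusensus's actual models or thresholds, which the company has not published.

```python
# Hypothetical sketch of a confidence-threshold triage pipeline of the
# kind the paper describes. Names, scores, and the 0.90 cutoff are
# illustrative assumptions, not the vendor's actual implementation.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    phone_score: float  # model confidence that a handheld phone is visible
    belt_score: float   # model confidence that no seat belt is visible

CONFIDENCE_THRESHOLD = 0.90  # only high-confidence flags reach a human

def triage(detections: list[Detection]) -> list[Detection]:
    """Keep only the images worth a human reviewer's time."""
    return [
        d for d in detections
        if max(d.phone_score, d.belt_score) >= CONFIDENCE_THRESHOLD
    ]

captures = [
    Detection("cab-001", phone_score=0.97, belt_score=0.10),  # likely phone use
    Detection("cab-002", phone_score=0.20, belt_score=0.15),  # nothing flagged
    Detection("cab-003", phone_score=0.05, belt_score=0.93),  # belt not visible
]

review_queue = triage(captures)
print([d.image_id for d in review_queue])  # prints ['cab-001', 'cab-003']
```

The point of the design is the funnel: the models discard the overwhelming majority of captures so that human reviewers, who make the final call, see only a small, high-confidence subset.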
The study frames the technology as a data-collection method rather than a direct enforcement tool, which is an important distinction. Naturalistic observation means the cameras record behavior as it occurs in real driving conditions, without the driver necessarily knowing they are being watched. That methodology produces a more accurate picture of how often violations actually happen than self-reported surveys do, since surveys tend to undercount risky behavior. But it also means drivers are being photographed inside their vehicles without consent, a fact that privacy advocates in multiple states have flagged as a concern even when the images are used only for research.
Researchers argue that the safety benefits could be substantial if the data are used to tailor enforcement and education to high-risk corridors, times of day, or driver groups. Yet the same granular data could be tempting for secondary uses, from civil litigation to workplace monitoring by trucking companies. The paper notes that governance and policy choices, rather than technical limitations, will ultimately determine how intrusive these systems become.
Minnesota Backs AI With State-Level Data Grants
The local Highway 7 deployment does not exist in a vacuum. The Minnesota Department of Public Safety has separately funded a grant allowing its Office of Traffic Safety to analyze distracted driving data more deeply. The grant supports measurement and enforcement strategies, including data initiatives that feed information to local agencies and safety partners across the state.
This two-tier structure, where a state agency funds the analytical backbone while local departments run the cameras, suggests Minnesota is building toward a broader rollout rather than treating Highway 7 as an isolated experiment. If the Office of Traffic Safety can demonstrate that AI-flagged data correlates with measurable crash reductions along the corridor, the political case for expanding the technology to other high-risk roads becomes much stronger. Without that outcome data, though, the program risks looking like an expensive surveillance exercise with no proven safety dividend.
The grant language also hints at a feedback loop: as AI systems generate more detailed information about when and where distracted driving occurs, state analysts can refine enforcement campaigns, which in turn may justify additional technology deployments. That dynamic could accelerate adoption, but it also raises the stakes for getting the privacy and transparency pieces right from the outset.
Connecticut Opens the Door to Automated Enforcement
Connecticut has taken a different path. The state’s Department of Transportation released formal guidance for municipalities on automated traffic enforcement, acting under the authority of Public Act 23-116 and outlining how local governments can opt in to use camera-based systems. The guidance focuses on implementing red-light and speed cameras in designated safety zones, with requirements for signage, public notice, and data handling.
Municipalities interested in participating must apply through the state’s online procurement portal, where projects are reviewed for compliance with statutory limits and technical standards. Connecticut’s framework does not yet extend to phone or seat belt detection, which means the state is several steps behind Georgia and Minnesota on AI-powered behavioral monitoring. Still, the fact that Connecticut has created a statewide legal infrastructure for automated enforcement makes it easier to add new capabilities later, including AI-based distracted driving detection, if lawmakers decide the benefits outweigh the risks.
For now, Connecticut’s experience may serve as a test of how far residents are willing to accept camera-based enforcement in exchange for promised safety gains. The state’s rules require that revenue from tickets be directed toward transportation safety, an attempt to blunt criticism that automated systems are primarily about generating money. Whether that assurance will satisfy skeptics remains to be seen, especially if future expansions move from measuring speed and red-light compliance to monitoring behavior inside vehicles.
Balancing Safety, Privacy, and Public Trust
Taken together, the three states illustrate a spectrum of approaches to AI traffic enforcement. Georgia is using AI to target a specific sector, commercial trucks, under federal safety priorities. Minnesota is layering local pilots on top of a statewide data strategy that could eventually support broader deployment. Connecticut is building the legal scaffolding for automated enforcement first, with the option to adopt more advanced AI tools later.
The common questions are less about whether the technology works in a narrow technical sense and more about how it is governed. Programs that rely on human officers to confirm violations may be more palatable than fully automated ticketing, but they still raise concerns about constant surveillance and potential bias in which flagged vehicles are actually stopped. Without regular public reporting on alert volumes, stop rates, citation outcomes, and crash trends, residents have little way to judge whether AI cameras are delivering safety benefits proportionate to their intrusiveness.
As more jurisdictions consider similar systems, the experiences of Georgia, Minnesota, and Connecticut suggest several benchmarks for public accountability: clear limits on how long images are stored and who can access them; independent evaluations of accuracy and bias; and transparent evidence that deployments are tied to measurable reductions in crashes and injuries. Absent those safeguards, AI traffic cameras risk being seen not as tools for safer roads, but as another layer of opaque surveillance watching drivers from the roadside.
This article was researched with the help of AI, with human editors creating the final content.