
The Trump administration is moving to put artificial intelligence not just in the tools agencies use, but in the very text of federal rules. Transportation officials are preparing to lean on Google’s Gemini system to draft regulations, while Republican lawmakers and the White House push a broader framework that favors national control of AI policy over state experiments. The result is a fast-forming experiment in government by algorithm, with supporters promising efficiency and critics warning that public safety and civil liberties are being handed to opaque code.
At the center of the fight is the Department of Transportation, where staff describe a plan to let a commercial chatbot generate the first cut of rules that govern everything from airline safety to autonomous vehicles. Around that initiative, President Donald Trump’s allies are building a legal and political scaffolding, from new legislation to an executive order on state AI laws, that could lock in machine-written policy as a defining feature of federal governance.
The Trump DOT’s leap toward AI‑written rules
Inside DOT, officials are preparing to feed complex policy questions into Google’s Gemini system and ask it to “write the perfect rule on XYZ,” as one internal description put it, effectively inviting a chatbot to draft federal transportation regulations before humans edit them. The administration plans to write regulations using artificial intelligence, and DOT has become the test bed for that ambition, with the White House signaling that AI should help agencies work “better and faster” on dense rulemakings that can otherwise take years to complete. According to one account, the plan was presented at a Monday briefing as a way to cope with staff shortages and mounting regulatory backlogs, positioning Gemini as a kind of automated policy aide rather than a mere research tool, even as some career staff quietly questioned whether a system trained on the open internet should be trusted with the first draft of binding safety rules.
The scope of what is at stake is hard to overstate, since DOT’s rules touch virtually every facet of transportation safety, including regulations for airlines, railroads, trucking fleets, pipelines, and the emerging world of self-driving cars. Reporting on the plan to use Gemini notes that the agency has lost nearly a fifth of its policy staff in recent years, a drain that helps explain why political leaders are so eager to automate parts of the process. Yet the same accounts describe internal unease, with some employees alarmed that a commercial model, trained and tuned outside government, could shape the language that determines how many hours a truck driver may stay on the road or what redundancy is required in a Boeing 737 MAX flight control system, long before the public ever sees a draft in the Federal Register.
Inside the Gemini experiment and staff backlash
From what I can piece together, the Gemini rollout at DOT is not a vague pilot but a structured experiment in which staff are shown how to prompt the system with specific regulatory questions and then refine its output. One presentation described demonstrations that walked employees through asking Gemini to generate full rule texts, which humans would then review and edit, effectively flipping the traditional model in which lawyers and economists draft language and only later use software for citation checks or formatting. Accounts of those sessions say the presenter framed Gemini as a way to “write the perfect rule on XYZ,” a phrase that has since circulated among skeptical staff as shorthand for a plan that seems to treat AI as a coequal author of federal law rather than a glorified spellchecker, and that framing has become a flashpoint for internal debate about whether the administration’s push to write regulations with artificial intelligence is moving too far, too fast.
Those concerns are not limited to anonymous grumbling. DOT’s own former acting chief AI officer, Mike Horton, has publicly criticized the idea of letting a commercial chatbot draft rules, warning that it “seems wildly irresponsible” to rely on a system whose training data, biases, and failure modes are not fully transparent to the agency. In one account of the internal rollout, a staffer recalled leaving a demonstration unsettled by how casually the presenter described handing Gemini the job of writing regulatory text, a sentiment echoed in outside commentary that notes how these developments have alarmed some at DOT. Critics inside and outside the building argue that even if humans retain formal signoff, the first draft often sets the frame for what is politically and legally thinkable, which means ceding that step to Gemini could tilt outcomes in ways that are hard to detect or reverse.
A broader Trump strategy to centralize AI power
The Gemini experiment does not exist in a vacuum; it sits inside a broader strategy, advanced through Trump executive orders that shape federal AI regulation and override state actions, that aims to keep control of AI policy in Washington and, more specifically, in the hands of President Donald Trump and his appointees. Over the past several months, the president has signed directives that the White House describes as promoting unbiased AI in federal systems, while also instructing agencies to favor innovation and “minimally burdensome” rules, a phrase that reappears in the executive order restricting state regulation of artificial intelligence. One order launches an AI Litigation Task Force inside the Department of Justice, with instructions to scrutinize state and local AI rules that might violate the First Amendment or conflict with federal priorities, effectively putting cities and states on notice that aggressive algorithmic accountability laws could be preempted or challenged in court.
Another directive, an executive order targeting state AI laws, directs the White House to coordinate challenges to state statutes that the administration views as onerous for developers, and it explicitly calls for carving certain sectors out of federal preemption efforts to preserve national security and critical infrastructure oversight. The order states that the administration must work with Congress to ensure a minimally burdensome national framework for AI, and it instructs DOJ to “challenge” AI rules in states that the White House deems too strict or too fragmented. In practice, that means the same political team that is encouraging DOT to let Gemini draft rules is also working to limit how much California, New York, or Colorado can do to set their own guardrails on automated decision systems, consolidating power over AI governance in the federal executive branch.
Congressional Republicans push AI for deregulation
On Capitol Hill, Republican lawmakers are moving in parallel with the administration, promoting legislation that would use AI to comb through the Code of Federal Regulations in search of rules to weaken or repeal. One Republican-backed proposal would direct agencies to deploy machine learning tools to identify redundant or outdated regulations, which could then be amended, rescinded, or replaced in bulk, a process that supporters say is necessary to modernize a sprawling rulebook that businesses have long complained is confusing and costly. The sponsors, including Rep. Aaron Bean of Florida, argue that AI is uniquely suited to spotting overlaps and contradictions across thousands of pages of text, and they frame the effort as a way to free up human staff for higher value work while letting algorithms handle the drudgery of cross-referencing and impact analysis.