The U.S. Department of Agriculture is moving toward adopting Grok, the AI chatbot built by Elon Musk’s xAI, for internal government work. The agency scheduled a formal stakeholder meeting with xAI, and a ready-made federal purchasing channel already exists to close a deal. But the courtship is drawing scrutiny for a specific reason: independent testing by Reuters found that Grok’s image generator produced sexualized images of people even when told the subjects had not consented, and those failures persisted after xAI had already applied safety updates.
The USDA has not publicly explained how it would address those documented risks, or whether its intended use of Grok would involve the image generation features at the center of the safety concerns. Neither the USDA nor xAI responded to requests for comment from the outlets that have reported on the matter, and no public statement from either party addresses the specific safety findings.
The procurement pipeline is already open
Grok does not need a special invitation to enter federal offices. The General Services Administration’s Buy AI program, a centralized marketplace designed to speed up agency access to artificial intelligence tools, lists “xAI: Grok for Government Teams” at $0.42 per unit. The listing does not define what constitutes a “unit.” GSA pricing for software-as-a-service products is typically structured per user per month or per API call, but the Buy AI page does not specify which model applies here, making the real cost of a deployment difficult to assess from the listing alone.
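Neither reading of “unit” is confirmed by the listing, but a purely hypothetical back-of-the-envelope sketch shows why the ambiguity matters: under the two common SaaS pricing models, the same $0.42 figure implies annual costs that differ by orders of magnitude. The user counts and call volumes below are invented for illustration, not drawn from any USDA document.

```python
# Illustrative only: the GSA listing's $0.42 "per unit" price is undefined,
# so these are hypothetical scenarios, not actual contract costs.

UNIT_PRICE = 0.42  # dollars per "unit", from the Buy AI listing

def annual_cost_per_user(users: int, months: int = 12) -> float:
    """Annual cost if a 'unit' means one user per month."""
    return UNIT_PRICE * users * months

def annual_cost_per_call(calls_per_day: int, days: int = 365) -> float:
    """Annual cost if a 'unit' means one API call."""
    return UNIT_PRICE * calls_per_day * days

# A hypothetical 10,000-seat agency under per-user-per-month pricing:
print(f"${annual_cost_per_user(10_000):,.0f}")   # roughly $50,400/year

# The same agency making 100,000 API calls a day under per-call pricing:
print(f"${annual_cost_per_call(100_000):,.0f}")  # roughly $15,330,000/year
```

The roughly 300-fold gap between the two readings is the point: without the unit definition, the listing price tells an agency almost nothing about its actual exposure.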
The listing is available to every federal agency through March 2027 and allows purchases without a separate competitive bidding process. Contract terms, pricing, and basic eligibility requirements are pre-established. However, an important distinction applies: appearing on the Buy AI marketplace is not the same as receiving agency-level authorization to use a product. Each federal agency maintains its own IT governance and security review process, and a tool listed on Buy AI must still clear those internal gates before it can be deployed on agency systems.
That streamlined procurement path does lower the barrier to adoption. For an agency like the USDA, it means the bureaucratic distance between “interested” and “purchased” is shorter than it would be under a traditional procurement. The GSA page notes that products on the platform are either FedRAMP-authorized or actively pursuing that authorization, a cloud-security certification the government requires for services that handle federal data. However, the listing does not specify which category Grok currently falls into. The FedRAMP Marketplace, where authorization status is publicly tracked, would be the place to confirm that detail, and the distinction matters: a product still working through the process has not yet passed the same third-party security assessments as one that has completed it.
What the USDA meeting tells us, and what it does not
The USDA’s own website confirms the stakeholder meeting with xAI, hosted by the department’s Office of Communications. That is a first-party government record, not a leak or a rumor. It establishes direct, official contact between the agency and the company and signals active interest in the technology. The calendar listing does not specify when the meeting occurred or was scheduled to occur, a basic reporting gap that remains unresolved in the public record as of May 2026.
What the listing does not establish is whether a contract has been signed, a pilot program launched, or Grok deployed on any USDA system. In federal procurement, those steps typically generate their own paper trail: contract awards, task orders, entries in public spending databases. None of those records have surfaced in connection with Grok at the USDA as of May 2026.
The scope of any potential use is also unresolved. Federal agencies apply AI tools to tasks ranging from customer service chatbots to data analysis to document summarization. Whether the USDA is interested in Grok’s text capabilities, its image generation, or both has not been disclosed. That distinction carries real weight, because the documented safety failures are concentrated in the image generation pipeline. A text-only deployment would present a different risk profile, though concerns about hallucinations, bias, and factual accuracy in text outputs remain relevant for any government application.
The safety record that shadows the deal
In February 2026, Reuters staff tested Grok’s image generation under controlled conditions and found that the tool at times produced sexualized images even when explicitly told that the depicted subjects had not given consent. Critically, this testing took place after xAI had already rolled out safety updates intended to curb exactly that kind of output. The results were reproducible, not a one-off glitch.
That reporting represents the strongest publicly available evidence of the specific content-safety failures that critics cite when questioning whether Grok belongs in government systems. xAI has publicly stated that it applies content filters and safety updates, but the Reuters findings showed those measures did not fully prevent harmful outputs. Notably, the Reuters report did not include a direct response from xAI addressing the specific test results, and no subsequent public statement from the company has addressed the findings in detail. Whether xAI has made additional changes since that testing, or whether the government-specific version of Grok ships with stricter guardrails than the consumer product, has not been confirmed by any public source.
The gap matters. Without transparency about which model version the government would receive, what configuration differences exist, or whether custom safety layers have been built for federal clients, outside observers have no reliable way to judge whether the risks Reuters documented still apply in full.
Federal AI policy is supposed to prevent this kind of ambiguity
The federal government is not operating without rules on AI adoption. The Office of Management and Budget issued guidance (Memorandum M-24-10) directing agencies to implement concrete safeguards before deploying AI, including impact assessments, risk management practices, and public transparency measures. Agencies are expected to evaluate AI tools for potential harms before putting them into production, not after.
Whether the USDA has conducted that kind of internal evaluation for Grok is unknown. The department has not described any red-teaming, risk assessment, or safety review it may have undertaken to supplement the external testing. Nor has it addressed how the documented image-generation failures would factor into a go or no-go decision.
The broader federal acquisition framework gives agencies considerable flexibility in how they buy technology. Contracts can take the form of firm-fixed-price deals, time-and-materials arrangements, or indefinite-delivery/indefinite-quantity vehicles that allow task orders to be issued as needs arise. The Small Business Administration’s guidance on types of contracts illustrates how flexible these mechanisms can be. That flexibility can accelerate access to emerging tools but also makes it harder for the public to track exactly when a product moves from consideration to active deployment.
Other agencies and Congress have been largely silent
As of May 2026, no other federal agency has publicly confirmed adopting Grok through the Buy AI marketplace, though the listing makes it available to all of them. Congressional oversight on the specific question of Grok in government systems has also been minimal. No committee hearing, formal inquiry, or public letter from lawmakers addressing the USDA’s engagement with xAI or the broader availability of Grok to federal buyers has entered the public record. That silence leaves a gap in the accountability chain: the procurement channel is open, documented safety problems exist, and the legislative branch has not yet weighed in.
The USDA’s engagement with xAI is unfolding against a larger push across the federal government to integrate generative AI into agency operations. The pressure to adopt cutting-edge tools that promise efficiency gains is real, and agencies that move slowly risk falling behind. But the obligation to avoid technologies capable of producing harmful or rights-violating outputs is equally real, and it is written into existing policy.
Right now, the public record on the USDA-xAI relationship consists of a procurement listing, a meeting calendar entry, and investigative reporting that found serious safety shortcomings in the product under consideration. The missing pieces are the internal risk assessments, configuration details, and procurement documents that would show whether the department has a plan to reconcile those facts, or whether it is moving forward without one. If the USDA does formalize a Grok deployment, it will set an early and visible precedent for how federal agencies handle AI tools with known safety deficits, and other departments watching the Buy AI marketplace will take their cues from what happens here.
*This article was researched with the help of AI, with human editors creating the final content.*