Elon Musk urged X users to upload their medical records to Grok, his xAI chatbot, pitching it as a tool capable of interpreting health data. The chatbot itself, however, carries no medical license, and European regulators are now investigating whether the platform even had the legal right to use public posts for AI training. The collision between Musk’s ambitions for Grok and the guardrails designed to protect sensitive personal information has created a real-world test of how far AI companies can push before privacy law pushes back.
Musk Promotes Grok as a Health Interpreter
In a series of posts on X, Musk positioned Grok as a competent tool for reading and analyzing medical information. He encouraged users to share lab results, imaging reports, and other health documents directly with the chatbot, suggesting it could offer useful interpretations. The pitch was bold: skip the waiting room, feed your data to AI, and get answers fast. Georgetown Law’s Tech Institute documented these claims and flagged the risk they carry.
The problem is straightforward. Musk has held Grok out as competent to interpret medical information, but he has made no claim that Grok is a licensed medical service. That distinction matters because practicing medicine without a license is illegal in every U.S. state, and offering diagnostic interpretations, even through software, can cross that line. A chatbot that tells a user their blood panel looks abnormal is not just answering a question; it is providing a service that licensed professionals are trained and regulated to deliver. Musk’s framing treats Grok as a convenience tool, but the legal system treats medical advice as a category with strict entry requirements.
No License, No Accountability
The gap between what Musk advertises and what Grok actually is creates a specific kind of danger for users. Someone who uploads a chest X-ray to Grok and receives a reassuring response may delay seeing a doctor. Someone who gets an alarming interpretation may panic unnecessarily. In either case, the user has no recourse. Grok is not bound by malpractice standards, does not carry insurance, and cannot be held to the same accountability framework as a physician or nurse practitioner. As Georgetown’s analysis noted, licensed services are, by definition, paid and regulated, and Grok fits neither description.
This is where most coverage of the story stops, treating the licensing question as a theoretical risk. But the practical consequences are already in motion. When Musk asked users to upload health data, some did. Those records, once inside X’s infrastructure, become subject to the platform’s data policies rather than to the health privacy frameworks that govern hospitals and clinics. In the United States, the Health Insurance Portability and Accountability Act protects medical records held by covered entities like doctors and insurers. X is not a covered entity. A user who voluntarily uploads a lab report to a social media platform has, in effect, moved that data outside the protective perimeter of health privacy law.
Europe Opens a Formal Inquiry
European regulators moved faster than their American counterparts. The Irish Data Protection Commission, which oversees X’s European operations because the company’s regional headquarters sit in Dublin, opened an inquiry into whether personal data in publicly accessible posts from European users on X was lawfully processed to train Grok’s large language models. The investigation is not limited to medical data; it covers the broader question of consent and lawful basis for processing under the General Data Protection Regulation.
The GDPR provides for significant penalties when companies process personal data without a valid legal basis. Fines can reach up to four percent of a company’s global annual revenue, a figure that for xAI and its parent ecosystem could be substantial. The Irish inquiry signals that European authorities view the training of AI on user posts as a live enforcement priority, not a hypothetical concern. For users who posted health-related content publicly on X, the investigation raises a specific question: did they consent to having that information fed into an AI model, and if not, what remedy do they have?
X’s Privacy Policy and Its Limits
X’s own privacy policy states that the company will not sell user data to a third party. That language, reported by The New York Times, offers a narrow reassurance. It does not address whether X can use that data internally to train its own AI products, which is a different question entirely. Sharing data with a third party and using data to build a proprietary model are distinct activities, and the privacy policy appears to prohibit only the former.
When The New York Times sought comment from X about how uploaded health data would be handled, the company did not respond. That silence is telling. A company confident in its data practices would typically welcome the chance to explain its safeguards. The lack of response leaves users without a clear answer about whether their medical records, once uploaded to Grok, are stored, used for further model training, or accessible to xAI employees. The xAI privacy policy itself was flagged as outdated as of November 2024 in references tied to European data protection authorities, raising additional questions about whether the company’s stated protections reflect its current practices.
*This article was researched with the help of AI, with human editors creating the final content.