LinkedIn has been feeding user-generated content into its artificial intelligence training systems, and a toggle the company added to account settings does not undo what has already been collected. The opt-out control, which surfaced for users outside the European Union in recent months and drew renewed attention in May 2026, lets members stop future data from entering AI pipelines but leaves everything previously ingested in place. For a platform where more than one billion professionals share resumes, salary details, and career advice, the stakes of that distinction are not abstract.
What LinkedIn changed and when
The new switch lives inside the “Data privacy” section of LinkedIn’s settings menu, under a subsection labeled “Data for Generative AI Improvement.” It is turned on by default. Members who never visit that page are automatically enrolled, and LinkedIn has confirmed that AI training on member content began before most users knew the toggle existed.
A Washington Post report first detailed the setting in September 2024, and LinkedIn’s own updated privacy policy broadened the language around how “content” and “activity” on the platform can be used to improve generative AI features. The company did not publish a standalone announcement with a specific effective date, which left many members learning about the change from news coverage rather than from LinkedIn itself.
LinkedIn said it notified users through emails, text messages, and in-app banners. But the rollout was uneven. Some members reported seeing only a brief banner that vanished after a single dismissal, with no follow-up. Others said they received no notification at all before the setting was activated on their accounts. LinkedIn has not disclosed how many users actually opened or engaged with those alerts.
What the toggle does and does not control
Turning the setting off stops LinkedIn from using new posts, comments, and profile updates for AI model training going forward. It does not reach backward. LinkedIn’s updated privacy FAQ states that opting out “does not affect training that has already taken place,” a position rooted in how large language models work: once data is woven into a model’s parameters during training, extracting specific contributions would require retraining from scratch.
The company has not specified exactly which AI models benefit from member data. LinkedIn has been rolling out AI-powered features steadily, including writing assistants for job seekers, automated recruiter tools, and post-summarization features. Microsoft, which owns LinkedIn, also operates a sprawling AI ecosystem anchored by Copilot. LinkedIn has not drawn a clear public line between data used solely for its own products and data that might support Microsoft’s broader AI development. The two companies share infrastructure and research resources, making that boundary difficult for outsiders to verify.
The definition of eligible data is similarly vague. Public profile text, posts, and comments are obvious candidates. But LinkedIn’s privacy policy language is broad enough to encompass private messages, job application materials, and recruiter correspondence. The company has not explicitly excluded those categories, leaving members who use LinkedIn’s messaging tools for sensitive negotiations or performance discussions without a firm answer.
Who is covered and who is carved out
The default opt-in does not apply to members in the European Economic Area, the United Kingdom, or Switzerland. LinkedIn excluded those regions because of the General Data Protection Regulation and the UK’s Data Protection Act 2018, both of which impose stricter requirements around consent and purpose limitation for personal data processing. The UK’s Information Commissioner’s Office had previously engaged with LinkedIn on its AI data practices, and the regulator has signaled ongoing interest in how platforms repurpose user content for model training.
For users in the United States, Canada, Australia, and most other markets, no equivalent federal framework forces an opt-in model. The California Privacy Rights Act gives California residents some rights to limit the use of personal information for automated decision-making, and a handful of other state laws include similar provisions. But none specifically address the scenario of a platform retroactively applying new AI-training purposes to data collected under earlier terms. The Federal Trade Commission has taken an increasingly aggressive posture on commercial surveillance and data-use changes, including a proposed rule on the subject, but enforcement actions specific to LinkedIn’s toggle have not materialized.
The geographic split means that two LinkedIn members posting identical content receive fundamentally different privacy protections based solely on where they live. A product manager in Berlin has stronger default safeguards than a product manager in Chicago, even though both use the same platform and share the same types of professional information.
Why this matters for a professional network
LinkedIn is not a casual social feed. Members treat it as a living resume, a job-search engine, and a professional reputation tool. The content people share there, including employment histories, skill endorsements, workplace opinions, and sometimes salary expectations, carries a different weight than a vacation photo on Instagram. When that material becomes training data for generative AI, it can surface in unpredictable ways: paraphrased in AI-generated summaries, reflected in recruiter tool outputs, or absorbed into models whose downstream applications LinkedIn has not fully described.
Meta faced a similar backlash when it began using Instagram and Facebook data for AI training and also carved out European users. The parallel illustrates an industry-wide pattern, but the comparison has limits. LinkedIn’s user base shares information with professional consequences in mind, and the expectation of how that data will be used is shaped by the platform’s positioning as a career tool, not an entertainment network.
Open questions remain about what happens to data after a member deletes their account. LinkedIn has not said whether content from deactivated profiles will eventually age out of training datasets or remain embedded indefinitely in models that power future features. The company also has not indicated whether it plans to offer more granular controls, such as separate toggles for public posts versus sensitive fields like contact details or demographic information.
What members can do before LinkedIn revises its controls again
The most immediate step is to turn the setting off. Open LinkedIn’s settings, navigate to “Data privacy,” find “Data for Generative AI Improvement,” and switch it to the off position. That will not erase what has already been used, but it draws a line against future collection for this purpose.
Beyond the toggle, members can audit what they share publicly. Limiting sensitive details in open profile fields, being selective about what goes into posts and comments, and periodically rechecking the setting are all practical steps. For users who rely on LinkedIn messaging for confidential conversations, the lack of clarity about whether those exchanges are in scope is reason enough to move sensitive discussions to a different channel.
In most jurisdictions, until privacy legislation catches up to the pace of AI development, the burden of protecting professional data on LinkedIn falls largely on the people who created it.
*This article was researched with the help of AI, with human editors creating the final content.*