Microsoft has confirmed that a bug in its Office suite allowed Copilot AI to surface private email content from users’ Exchange Online accounts, even when data loss prevention policies were in place. The issue, tracked as CW1226324 in the Microsoft 365 admin center, began on Wednesday, January 21, 2026, and affects messages stored in the Sent Items and Drafts folders. The disclosure arrives as European regulators are separately pressing Microsoft on how it manages generative AI risks, raising pointed questions about whether enterprise privacy controls can keep pace with AI integration.
How Copilot Bypassed Email Protections
The core of the problem is straightforward but alarming for any organization that handles sensitive communications. When users issued prompts to Copilot within Microsoft 365, the AI assistant could retrieve and display content from Exchange Online messages in Sent Items and Drafts folders. That behavior occurred despite the presence of data loss prevention (DLP) policies, the very safeguards enterprises configure to stop sensitive data from leaking outside approved channels. The incident was logged under ID CW1226324 in the Microsoft 365 admin center, and institutions such as the University of Pennsylvania have documented it in their own service alert listings for PennO365 customers.
DLP policies are supposed to act as guardrails. They scan outgoing messages for credit card numbers, health records, legally privileged content, and other regulated data, then block or flag those messages before they reach unintended recipients. When Copilot sidesteps those filters, it effectively creates a second, unmonitored exit point for the same information. A user querying Copilot about a project summary, for instance, could inadvertently receive fragments of a draft email containing contract terms, personnel details, or financial figures that DLP rules were specifically designed to contain. Because Copilot answers appear inside productivity apps rather than traditional mail clients, that leakage can be harder for compliance teams to detect using existing monitoring tools.
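To make that gap concrete, here is a minimal, illustrative sketch of the kind of outbound check a DLP rule performs: pattern matching on a message body at send time, with a block action on a match. The patterns and function names are hypothetical simplifications, not Microsoft's actual rule engine, which uses far richer classifiers and confidence scoring.

```python
import re

# Illustrative, simplified patterns; real DLP engines combine checksums,
# keyword proximity, and confidence thresholds rather than bare regexes.
SENSITIVE_PATTERNS = {
    "credit_card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def dlp_check_outbound(message_body: str) -> list[str]:
    """Return the names of sensitive-data types detected in an outgoing message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message_body)]

def send_message(body: str, recipient: str) -> bool:
    """Block the send if any sensitive pattern matches, mirroring a DLP 'block' action."""
    hits = dlp_check_outbound(body)
    if hits:
        print(f"Blocked send to {recipient}: matched {hits}")
        return False
    print(f"Sent to {recipient}")
    return True
```

The key point is where the check sits: it fires only on the send path. Any other route that reads the same mailbox content, such as an AI assistant answering a prompt, never passes through it.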
What the Admin Center Advisory Reveals
Details about CW1226324 became publicly visible through the information systems group that runs PennO365 at the University of Pennsylvania, which routinely republishes extracts from the Microsoft 365 admin center incident feed. In its summary, the advisory notes that users “may notice Copilot prompts returning information from Sent and Draft folder messages with DLP policies configured.” That language confirms two things: Microsoft was aware the bug existed, and the company acknowledged that DLP enforcement was the specific control being circumvented. For campus IT staff who relay Microsoft’s notices to a broader academic community, the phrasing underscored that the problem was not theoretical but observable in day‑to‑day Copilot use.
What the advisory does not include is equally telling. There is no public count of affected tenants or users, no detailed description of which Copilot experiences were impacted, and no explanation of the root cause. Microsoft has not issued a standalone press release or blog post addressing the incident, leaving organizations to infer the scope from terse admin center text. For IT administrators trying to assess their exposure, the most concrete detail is the incident ID itself, which they can look up in their own dashboards. Campus communities like Penn’s rely on these forwarded notices to decide whether to suspend features, conduct mailbox audits, or issue internal privacy advisories, but in this case they must do so without clear guidance on how broadly data may have been exposed or whether certain configurations were more at risk than others.
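For administrators who prefer scripting over the admin center UI, service health issues and advisories can also be pulled programmatically through the Microsoft Graph service communications API. The sketch below assumes the caller already holds a Graph access token with the ServiceHealth.Read.All permission and that the advisory in question is exposed through this endpoint for their tenant; token acquisition (typically via MSAL) is omitted.

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
INCIDENT_ID = "CW1226324"  # the ID as shown in the Microsoft 365 admin center

def fetch_advisory(access_token: str) -> dict:
    """Fetch a single service health issue/advisory by its admin center ID."""
    resp = requests.get(
        f"{GRAPH_BASE}/admin/serviceAnnouncement/issues/{INCIDENT_ID}",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Whether a given advisory ID appears under this endpoint can vary by tenant and advisory type, so the admin center remains the authoritative place to confirm exposure.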
EU Regulators Already Pressing on AI Risks
The timing of this bug disclosure is uncomfortable for Microsoft because European regulators are already scrutinizing how the company handles generative AI risks. Under the Digital Services Act (DSA), the European Commission can demand detailed information from very large online platforms about how they manage systemic risks. In late 2024, the Commission used those powers to require Microsoft to submit information about generative AI features in Bing, including Copilot, as described in a formal DSA information request published on the EU’s digital strategy portal. That request is legally binding, meaning Microsoft cannot simply decline or delay its response without risking enforcement action.
The DSA inquiry focuses on Bing rather than Microsoft 365 Copilot directly, but the regulatory logic applies to both products. The Commission’s accompanying communication highlights concerns about whether Microsoft has adequate systems to prevent generative AI from producing harmful or misleading outputs and from mishandling user data. A bug that lets Copilot bypass DLP protections in Exchange Online fits squarely within that risk category, even if it falls under a different product line. From the Commission’s perspective as the Union’s executive arm, such incidents raise doubts about whether internal testing and safeguards are robust enough before AI tools are rolled out to millions of users. For compliance teams inside large enterprises operating in Europe, the CW1226324 incident is not just a technical glitch; it is a concrete example regulators can point to when arguing that AI deployments need stronger ex‑ante risk assessments and continuous oversight.
DLP as a False Floor for AI Privacy
Most coverage of AI privacy risks focuses on training data, asking whether user emails or documents are being fed into large language models to improve them. The CW1226324 bug exposes a different and arguably more immediate threat: retrieval. Copilot does not need to train on a user’s email to expose it. It only needs permission to search the mailbox and surface results in response to a prompt. DLP policies were built for a pre‑AI workflow where data moved through predictable channels like outbound email, file sharing, or printing. AI assistants introduce a new retrieval path that existing DLP rules were never designed to monitor, because the content never technically “leaves” the tenant in a way traditional tools recognize, even as it is recombined and displayed in novel contexts.
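A purely conceptual sketch makes the structural problem visible: the policy hook sits on the send path, while the assistant’s retrieval path reads the same drafts directly and never calls it. Every name here is hypothetical scaffolding for illustration, not Copilot’s actual architecture.

```python
from dataclasses import dataclass

@dataclass
class Message:
    folder: str   # e.g. "Drafts", "Sent Items"
    body: str

def outbound_dlp_check(body: str) -> bool:
    """Stand-in for a DLP rule that only inspects messages at send time."""
    return "CONTRACT-TERMS" not in body  # allow the send only if no sensitive marker

def send_message(msg: Message) -> str:
    # The traditional exit point: DLP sits directly on this path.
    return "sent" if outbound_dlp_check(msg.body) else "blocked by DLP"

def answer_prompt(prompt: str, mailbox: list[Message]) -> str:
    # The new exit point: retrieval and summarization never call the
    # outbound check, so protected drafts can surface in the answer.
    relevant = [m.body for m in mailbox if prompt.lower() in m.body.lower()]
    return " / ".join(relevant) or "no matching content"

mailbox = [Message("Drafts", "Project summary: CONTRACT-TERMS payout schedule")]
print(send_message(mailbox[0]))                    # -> blocked by DLP
print(answer_prompt("project summary", mailbox))   # -> surfaces the protected draft anyway
```

The same content that the outbound check blocks is returned verbatim by the prompt path, which is exactly the shape of exposure the advisory describes.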
This gap suggests that enterprises treating DLP as a sufficient privacy control for AI‑enabled environments are operating under a flawed assumption. The bug did not require a sophisticated prompt injection or adversarial attack. It occurred during normal Copilot use, which means any licensed user with access to Exchange Online and Copilot could have triggered it without knowing they were pulling protected content. For organizations in regulated industries, that kind of silent data exposure can trigger breach notification obligations, even if no data left the corporate network, because the information was made available to an AI system outside the intended access scope. It also complicates internal investigations: logs may show only that a user asked Copilot a seemingly innocuous question, without clearly recording that sensitive draft material was summarized or quoted in the answer.
What Comes Next for Enterprise AI Trust
The broader question this incident raises is whether enterprise customers can trust AI assistants that sit on top of complex, legacy productivity stacks. Microsoft has repeatedly assured customers that Copilot respects existing permissions, sensitivity labels, and compliance policies, presenting the assistant as a neutral interface layered over mature governance controls. CW1226324 shows that this assurance is only as strong as the integration code that binds Copilot to services like Exchange Online. When that layer fails, long‑standing protections such as DLP can be undermined without any obvious change to tenant configuration. For risk officers and general counsels who signed off on Copilot deployments based on those assurances, the episode will likely prompt a reassessment of how much reliance they place on vendor statements versus independent testing.
In the near term, organizations have a limited menu of responses. Some will choose to restrict Copilot’s access to email data, limiting the assistant to less sensitive content like internal documentation or public‑facing files. Others may narrow Copilot licensing to roles that do not routinely handle regulated information, accepting a reduced productivity benefit in exchange for a smaller blast radius if similar bugs emerge. A more ambitious response would be to treat AI assistants as their own data channels, subject to bespoke monitoring, logging, and policy enforcement rather than simply inheriting whatever rules apply to email or document storage. Until Microsoft provides a detailed root cause analysis, confirms that the bug has been fully remediated, and demonstrates that similar flaws are unlikely to recur, CW1226324 will stand as a case study in how quickly AI features can erode the foundations of enterprise data governance, and how urgently customers and regulators alike are demanding stronger guarantees.
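One way to picture that “AI as its own data channel” approach is to apply a sensitive-data scan, plus an audit log entry, on the assistant’s output path before anything is rendered to the user. This is an illustrative pattern under the assumption that an organization can interpose on assistant responses at all; it is not a supported Copilot configuration.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("ai_channel_audit")

# Illustrative credit-card-like pattern; a real deployment would reuse the
# organization's existing sensitive-information definitions.
SENSITIVE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def guarded_assistant_reply(user: str, prompt: str, raw_reply: str) -> str:
    """Treat the assistant as its own channel: scan and log its output before
    display, instead of relying only on outbound email DLP."""
    if SENSITIVE.search(raw_reply):
        audit_log.info("user=%s prompt=%r reply_blocked=sensitive_pattern", user, prompt)
        return "[response withheld: matched a sensitive-data pattern]"
    audit_log.info("user=%s prompt=%r reply_released", user, prompt)
    return raw_reply
```

The design choice is simply to give the AI channel its own enforcement and audit trail, so that an investigation can show what a prompt actually surfaced rather than only that a prompt was asked.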
*This article was researched with the help of AI, with human editors creating the final content.*