X, the social media platform owned by Elon Musk, announced on Monday that paid creators who post AI-generated videos depicting armed conflict without proper disclosure labels will face suspension from its Creator Revenue Sharing program for 90 days, with repeat offenders permanently removed. The policy, revealed by X head of product Nikita Bier, invokes the concept of “times of war” and references U.S. congressional war powers as its rationale. The move arrives as regulators in Europe are already pressing X on how it handles generative AI content and deepfakes, raising questions about whether the platform is acting on principle or positioning itself ahead of potential enforcement actions.
What the New AI Labeling Rule Requires
The core of the policy is straightforward: creators enrolled in X’s revenue-sharing program must now label any AI-generated video that depicts armed conflict. A first violation triggers a 90-day suspension from Creator Revenue Sharing, as first detailed in a TechCrunch report, cutting off the financial incentive that drives much of the platform’s content ecosystem. For those who violate the rule again after reinstatement, the consequence escalates to permanent removal from the program, effectively ending their ability to monetize posts through X’s ad revenue split.
The penalty structure targets a specific pressure point. Rather than banning accounts or removing posts outright, X is hitting creators where it matters most: their income stream. This approach leaves the content itself largely untouched while stripping away the profit motive behind posting unlabeled synthetic war footage. Whether that distinction satisfies critics who want misleading AI content removed entirely, not just labeled, is a separate debate. But the financial penalty creates a clear deterrent for the subset of users who earn money on the platform and may discourage the most opportunistic use of AI-generated conflict imagery to chase engagement and payouts.
The “Times of War” Justification and Its Limits
X’s framing of the policy leans on an unusual reference point. To define what qualifies as “times of war” for the purposes of requiring AI labels on armed conflict videos, the platform pointed to a Senate explainer on formal declarations of war. That page outlines Congress’s constitutional authority to declare war, a power last exercised during World War II, and lists the limited set of conflicts that have received such declarations. The gap between these formal acts and the reality of modern military engagements, from drone campaigns to coalition interventions, makes this a slippery foundation for a social media content policy that purports to address today’s information environment.
The choice to anchor a moderation rule in congressional war powers language is notable for what it avoids. By tying the policy to an institutional definition rather than making editorial judgments about which conflicts count, X sidesteps the politically charged task of deciding whether events in Ukraine, Gaza, Sudan, or elsewhere meet its threshold. That ambiguity could prove convenient or problematic depending on how enforcement plays out. If the rule applies only when Congress has formally declared war, it would cover almost nothing in the modern era. If X interprets the phrase more broadly (perhaps treating any large-scale armed conflict as falling under the “times of war” rubric), the platform will need to explain where it draws the line, and that explanation has not yet arrived in public statements.
How X Plans to Enforce the Rule
Detection and enforcement will rely on two main mechanisms: Community Notes and metadata signals. According to an Engadget overview, Community Notes (X’s crowdsourced fact-checking system) will be used to flag posts that appear to show AI-generated conflict footage without disclosure, while technical cues embedded in media files may help identify synthetic content. Community Notes contributors can append clarifying context to posts and, under this framework, their findings could trigger revenue-sharing penalties for creators who fail to label AI-generated war imagery.
Neither tool is foolproof. Community Notes depends on enough informed users seeing and flagging a post before it spreads widely, and its consensus-driven model can be slow to respond to fast-moving viral clips. Metadata detection, meanwhile, works only when AI-generation tools embed identifiable markers and those markers survive editing, recompression, or re-uploading. Sophisticated actors, the very accounts most likely to weaponize synthetic war footage for propaganda or influence operations, are also the most likely to strip or obfuscate metadata before posting. The enforcement framework signals intent, but its practical effectiveness will hinge on operational details X has not fully disclosed, such as whether human moderators or additional automated classifiers will review high-risk content or repeat offenders.
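X has not said what those metadata checks involve, but the open C2PA content-credentials standard illustrates the general mechanism: generation tools embed a signed provenance manifest in the media file, and a platform can look for it at upload time. The Python sketch below is a minimal illustration under that assumption, not a description of X's actual pipeline; it does a crude byte scan for marker strings associated with C2PA manifests and IPTC provenance fields, which also demonstrates why the signal is fragile.

```python
# Minimal illustration (not X's pipeline): scan a media file for
# provenance markers that some AI generators embed. Real systems
# verify the signed C2PA manifest; byte-matching is shown here only
# to make the mechanism, and its weakness, concrete.
from pathlib import Path

# Byte signatures associated with content provenance:
# "c2pa" and "jumb" appear in the JUMBF boxes that carry C2PA
# manifests in JPEG/MP4 containers; "digitalsourcetype" is the IPTC/XMP
# field some generators set to values like "trainedAlgorithmicMedia".
PROVENANCE_MARKERS = [
    b"c2pa",
    b"jumb",
    b"digitalsourcetype",
]

def has_provenance_metadata(path: str) -> bool:
    """Return True if any known provenance marker appears in the file.

    Re-encoding or recompressing a video rewrites the container and
    silently drops these boxes, so a negative result proves nothing
    about whether the footage is synthetic.
    """
    data = Path(path).read_bytes().lower()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    import sys
    for media_file in sys.argv[1:]:
        found = has_provenance_metadata(media_file)
        print(f"{media_file}: provenance markers {'found' if found else 'absent'}")
```

A production system would cryptographically verify the manifest rather than match bytes, but the failure mode is identical: anyone who re-encodes the file before uploading strips the evidence, which is exactly the gap the paragraph above describes.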
EU Regulatory Pressure as a Backdrop
This policy did not emerge in a vacuum. The European Commission has already sent formal requests for information to X under the Digital Services Act (DSA), focusing on the platform’s risk assessments and mitigation measures related to generative AI, elections, illegal content, and fundamental rights. In one such request, described in a TechCrunch analysis, regulators pressed X on how it identifies and manages deepfake risks and other systemic harms tied to AI-generated media. The DSA empowers the Commission to impose fines of up to six percent of a platform’s global annual revenue for serious noncompliance, giving these inquiries real financial teeth.
Viewed through that lens, X’s new labeling requirement looks like more than a purely principled response to misinformation concerns. It functions as a visible, documentable policy that the company can point to when regulators ask what steps it has taken to address AI-driven risks, especially around sensitive topics like war and elections. The timing is notable: unveiling a specific enforcement mechanism for AI-generated armed conflict videos gives X a concrete example of “mitigation measures” at a moment when the Commission is scrutinizing such efforts. Whether this narrowly targeted rule is robust enough to satisfy DSA scrutiny is another matter, particularly if X cannot demonstrate consistent enforcement, transparent reporting, or meaningful reductions in the spread of misleading synthetic war content.
Financial Incentives, Speech Claims, and the Bigger Picture
The decision to police AI war footage through revenue-sharing penalties rather than content removal reflects a broader tension in how X presents itself. The platform has positioned its current iteration as a free-speech-first alternative to competitors, and Musk has repeatedly framed expansive content moderation as censorship. By pulling creators out of the money pipeline while leaving their posts visible, X attempts to thread a needle: it punishes deceptive behavior without directly restricting expression. That framing aligns with the platform’s brand identity, but it also means that unlabeled AI war videos can still circulate widely and accumulate views even after the creator has been sanctioned. In practice, the content remains available to mislead; only the creator’s paycheck changes.
For advertisers and institutional partners, the policy offers a partial, and possibly fragile, layer of reassurance. Brands worried about appearing alongside graphic or misleading war footage might welcome any move that disincentivizes sensational AI content, yet the absence of systematic removal or robust pre-publication checks leaves open the risk that such material will trend before any penalty kicks in. The policy also applies only to creators in the revenue-sharing program, a subset of X’s user base; anonymous or non-monetized accounts can still post unlabeled synthetic conflict videos without facing the same financial consequences. That asymmetry underscores the limits of a monetization-focused approach to content integrity and highlights the broader question facing X and other platforms: whether tweaking incentives for paid creators is enough to address the societal risks posed by AI-generated war imagery, or merely a first, incomplete step taken under the shadow of looming regulation.
This article was researched with the help of AI, with human editors creating the final content.