
X’s head of product, Nikita Bier, announced a policy change requiring AI labels on AI-generated videos of armed conflicts for creators in the platform’s revenue sharing program.
The policy is aimed at preserving the authenticity of content during active conflicts, a response to the rapid advancement of AI video generation quality. It applies only to monetized creators and covers only armed-conflict footage; general AI content and non-monetized accounts are unaffected.
First-time violators will be suspended from revenue sharing for 90 days, Bier stated. Repeat offenders will be permanently removed from the program.
Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program.
During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies,…
— Nikita Bier (@nikitabier) March 3, 2026
Violations will be identified through Community Notes or by detecting metadata from generative AI tools. The platform already watermarks images and videos generated by its Grok chatbot but has not previously required users to disclose AI-generated content.
Bier cited the need for authentic information “during times of war,” although the current U.S.-Israel-Iran conflict has not been formally declared a war; the U.S. has not formally declared war since 1942.
Separately, X is testing a broader AI labeling toggle that would let users mark any post as containing synthetic content. Social Media Today first reported on the feature, though X has not shared a timeline for its release.