X is reportedly developing a new transparency feature that would tag posts created or significantly altered using AI. The proposed ‘Made with AI’ label is intended to help users quickly identify synthetic or AI-manipulated media, including generated images, edited visuals, and potentially altered video, as reported by independent researcher Nima Owji. Early indications suggest creators may be able to self-disclose AI use during the posting process, while the platform also explores automated detection tools to flag synthetic content.
Notably, the company previously introduced policies targeting ‘manipulated media’ to label deceptive edits and misleading visuals. However, those rules were designed primarily for altered real-world footage. The new labelling initiative marks a shift toward identifying fully synthetic content as well as AI-enhanced media, acknowledging that generative AI can create entirely fabricated scenes that never existed. By flagging such content, the platform aims to provide context without restricting creative use cases like satire, art, and visual experimentation.
The timing of this potential development is notable, as X has faced intensifying regulatory pressure and scrutiny following a series of controversies tied to AI-generated and manipulated content. Recently, the platform’s AI chatbot Grok drew backlash after users showed it could be prompted to generate explicit, non-consensual deepfake imagery, including fabricated likenesses of public figures, reigniting concerns about harassment, impersonation, and reputational harm enabled by generative AI.
In response to these controversies, regulators in regions like Europe and India have intensified scrutiny of how platforms address AI-generated intimate imagery and identity misuse, with X among the companies facing questions about moderation speed, detection systems, and user safeguards. In February 2026, authorities in Spain opened a criminal investigation into multiple platforms, including X, over the spread of AI-generated child sexual abuse material, underscoring the growing legal risks associated with synthetic media abuse.
Even earlier, researchers and safety groups warned that synthetic political visuals and AI-edited clips were spreading widely across social platforms ahead of major elections in multiple countries, increasing pressure on technology companies to strengthen authenticity safeguards and improve transparency around manipulated media. Many key digital markets with large internet user bases are now moving toward stricter oversight of synthetic media and platform accountability. For example, India’s strengthened 2026 IT regulations mandate clear labelling of AI-generated content and faster removal of harmful deepfakes.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.