Commission opens consultation on draft guidelines for AI transparency obligations

From 2 August 2026, people in the European Union will have to be informed when they are interacting with artificial intelligence (AI) systems or exposed to certain AI-generated or manipulated content. Today, the European Commission published draft guidelines on these transparency obligations for stakeholder feedback, ahead of adoption.

Under the AI Act, AI providers will have to inform people when they are interacting with an AI system and add machine-readable marks to enable the detection of AI-generated or manipulated content. Deployers will also have to inform people when they are exposed to deep fakes or AI-generated publications on matters of public interest, and when they are subject to emotion recognition or biometric categorisation systems.

The draft guidelines take into account input from previous consultations and aim to clarify the scope of these obligations and help providers and deployers comply with them. A code of practice drafted by independent experts will complement the guidelines. The final code, expected in June 2026, will be a voluntary tool to help demonstrate compliance.

Stakeholders, including providers and developers of AI systems, businesses and public authorities, academia, research institutions and citizens, are invited to share their views by 3 June 2026.