The European Union has released a new ‘AI Code of Practice’ aimed at helping companies prepare for upcoming rules under the AI Act, which will take effect for general-purpose AI systems from August 2, 2025. The code is voluntary, but it gives AI developers and providers a structured way to demonstrate that they are meeting the EU’s expectations. It focuses on three key areas: transparency, copyright compliance, and safety and security. The code was developed for the Commission by 13 independent experts, with input from researchers and civil society groups.
The code matters chiefly for developers of general-purpose AI (GPAI) models: large AI systems that can be applied across many different tasks, such as language models, image generators, and coding assistants. The first section of the code focuses on transparency. All GPAI developers, regardless of the size of their models, are expected to document how their models work and what they can and cannot do. This includes submitting a ‘Model Documentation Form’ with details about training methods, known limitations, typical use cases, and any risks involved in using the system.
The second part of the code covers copyright compliance. All GPAI providers must have a clear policy on how they use copyrighted material when training their AI systems. If copyrighted content is used, companies need to ensure it was obtained lawfully or that they respected any opt-outs under the EU’s text and data mining rules. They must also provide a way for rights holders to file complaints and request that their content be removed.
The third and final part of the code deals with safety and security, but this section applies only to ‘systemic GPAI models’: large, powerful AI systems that exceed a technical threshold based on the computing power used in training (currently set at more than 10^25 floating-point operations). For these models, developers are required to conduct risk assessments, put safety and cybersecurity measures in place, monitor their systems for misuse or failure, and report serious incidents. These companies must also be transparent about how they make safety decisions within their organizations.
While the code is not legally binding, the Commission strongly encourages companies to follow it. Notably, the AI Act itself became law on August 1, 2024, but its provisions take effect in stages. For GPAI models, the rules apply from August 2, 2025, although models already on the market will have until August 2, 2027 to comply.
However, while the EU maintains that the code is flexible and supports both innovation and responsible AI use, some tech firms and industry groups argue that the documentation and safety requirements are complex and costly, particularly for smaller businesses. The Commission has, for its part, indicated that the code may be updated in the future based on feedback from industry and Member States.
The move also comes at a time when tech giants such as Google are already facing intense scrutiny in the EU over their AI practices. Google is reportedly under investigation over its AI Overviews feature and alleged misuse of publisher content, as well as a separate probe into whether it complied with GDPR rules when using user data to train its AI models. Meta, meanwhile, faced a year-long regulatory delay before rolling out its AI chatbot across Europe in March 2025, with European regulators particularly concerned about how it trains its AI models on user-generated content. Even then, the European version of the chatbot is said to be more restricted than its US counterpart.