The General-Purpose AI Code of Practice
In July 2025, the European Commission completed the development of the General-Purpose AI Code of Practice, the first voluntary document of its kind to set standards for the responsible development and use of general-purpose AI models such as those behind ChatGPT, Gemini, and Claude. The Code is a logical step towards the practical application of the provisions of the EU AI Act. The document contains specific recommendations on transparency, copyright compliance, and the safety of models posing systemic risk, and aims to help companies navigate the new regulatory requirements.
The Code was prepared with the participation of over 1,400 experts, ranging from tech giants to representatives of the academic community and civil society. Serhii Barbashyn, managing partner of Barbashyn Law Firm, also joined the analysis and expert discussion process, sharing his experience at the intersection of law, ethics, and regulation of emerging technologies. His participation was part of the efforts of Ukrainian experts to shape a responsible international agenda in the field of artificial intelligence.
In this article, we review the key provisions of the Code and how it was developed, and analyze its significance for the European and global legal landscape in the field of AI.
The General-Purpose AI Code of Practice in the EU: overview and significance
The General-Purpose AI Code of Practice is a voluntary document developed by the European Commission to support companies in complying with the requirements of the EU Artificial Intelligence Act (AI Act), which came into force in 2024 and is being implemented in stages; its requirements for general-purpose AI (GPAI) models apply from August 2, 2025.
The main purpose of the Code is to promote the responsible development and use of GPAI models, such as the large language models behind ChatGPT, Gemini, and Claude. The Code offers clear recommendations on transparency, safety and security, and copyright compliance to ensure the ethical use of AI. It is not legally binding, but companies that adopt it gain greater legal certainty and a simplified route to complying with the AI Act. The Code also sets standards that can serve as a benchmark for global AI regulation.
The process of creating the Code
Development of the Code took about a year and concluded with the document's publication on July 10, 2025. The process was coordinated by the European Artificial Intelligence Office (EU AI Office) and involved a wide range of stakeholders: about 1,400 participants, including representatives of the technology industry, academia, civil society, and small businesses. The work was carried out in four working groups led by recognized experts in the field of AI.
Despite the procedural complexity of drafting and adopting the Code and the need to reconcile divergent positions, the Code is considered a significant achievement that demonstrates the EU’s commitment to an inclusive approach to AI governance.
Main provisions of the Code
The General-Purpose AI Code of Practice focuses on three key aspects to ensure the ethical and safe development of AI technologies:
- Data transparency
Companies that develop AI models are required to provide comprehensive information about the data used to train them, including a detailed description of data sources and of collection and processing methods. Particular attention is paid to reducing the risk of bias in models, which can lead to discriminatory results. The Code requires the implementation of mechanisms to detect and filter unacceptable content, such as material containing data obtained in violation of personal data protection laws (e.g., the GDPR). To do this, companies must use automated moderation and data analysis tools and document their approaches to ensure reproducibility and verification (a minimal illustrative sketch follows this list).
- Copyright compliance
The Code establishes clear requirements for respecting intellectual property. AI developers must implement policies that prevent the reproduction of copyrighted material without proper permission. This involves the use of technical measures such as content filtering algorithms, data origin verification systems, and mechanisms for identifying protected material. For example, if a model generates text, images, or music, it must not reproduce protected content without a license. The Code also calls for cooperation with rights holders to ensure transparency in the use of their materials for training models.
- Safety and security (for models posing systemic risk)
This section applies to the most powerful AI models, those classified as posing systemic risks, meaning they may threaten public safety or economic stability. The Code requires companies to conduct regular risk assessments, including testing models before they are placed on the market and after significant updates. For example, models must be tested for their ability to generate dangerous content, such as instructions for creating biological weapons or disinformation that could affect democratic processes. If serious incidents are detected, companies must report them to the EU AI Office within 2 to 15 days, depending on the severity of the problem. The Code also emphasizes cooperation with providers of downstream systems that integrate these models, to ensure safety at all stages of use.
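Purely as an illustration of what the technical measures described under the transparency and copyright items might look like, below is a minimal Python sketch of a training-data screening step: it excludes records from sources on a rights-holder opt-out list, runs a deliberately naive personal-data check, and writes a reproducible provenance log. The opt-out list, the record structure, and the e-mail pattern are all hypothetical placeholders; a real pipeline would rely on far more robust detection tooling, but the overall shape (filter, record the reason, document the result) reflects the kind of documentation the Code envisages.

```python
import hashlib
import json
import re
from dataclasses import dataclass, asdict

# Hypothetical opt-out list: domains whose rights holders have reserved
# their rights against text-and-data mining (placeholder values).
OPTED_OUT_SOURCES = {"example-news-site.com", "stock-photo-library.example"}

# Deliberately naive personal-data pattern (e-mail addresses only), a
# stand-in for the far more sophisticated checks GDPR compliance requires.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

@dataclass
class Record:
    text: str
    source_domain: str
    collected_at: str  # ISO date, kept for provenance documentation

def screen(record: Record) -> tuple[bool, list[str]]:
    """Return (keep?, reasons) for a single training record."""
    reasons = []
    if record.source_domain in OPTED_OUT_SOURCES:
        reasons.append("source opted out of text-and-data mining")
    if EMAIL_RE.search(record.text):
        reasons.append("contains e-mail address (possible personal data)")
    return (not reasons, reasons)

def build_provenance_log(records: list[Record]) -> list[dict]:
    """Produce an auditable, reproducible log entry per record."""
    log = []
    for rec in records:
        keep, reasons = screen(rec)
        log.append({
            **asdict(rec),
            "content_sha256": hashlib.sha256(rec.text.encode()).hexdigest(),
            "kept": keep,
            "exclusion_reasons": reasons,
        })
    return log

if __name__ == "__main__":
    sample = [
        Record("An open-licence article about EU AI policy.",
               "open-data.example", "2025-05-01"),
        Record("Contact the author at jane.doe@example-news-site.com",
               "example-news-site.com", "2025-05-02"),
    ]
    print(json.dumps(build_provenance_log(sample), indent=2))
```

Logging a content hash alongside the keep/exclude decision is one simple way to make the screening step verifiable after the fact, which is the point of the Code's reproducibility and documentation requirements.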
Serhii Barbashyn’s participation drew on his expertise, and that of our firm, in copyright, GDPR, and data protection in the IP and AI fields. The Code also provides for the creation of standardized risk assessment methodologies that can be applied to different AI models, including criteria for identifying systemic risks and recommendations for minimizing them. For example, companies must implement monitoring systems capable of detecting anomalies in model behavior in real time.
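The Code does not prescribe any particular monitoring technique. As a purely illustrative assumption, the Python sketch below shows one simple approach to near-real-time behavior monitoring: a sliding-window monitor that raises an alert when the rate of flagged model outputs drifts well above an assumed baseline. The window size, baseline rate, and tolerance are hypothetical placeholders, and a production system would track many more signals.

```python
import random
from collections import deque

class BehaviorMonitor:
    """Alert on drift in a flagged-output rate over a sliding window.
    All thresholds here are illustrative placeholders."""

    def __init__(self, window: int = 500, baseline_rate: float = 0.01,
                 tolerance: float = 3.0):
        self.events: deque[int] = deque(maxlen=window)  # 1 = output flagged
        self.baseline_rate = baseline_rate
        self.tolerance = tolerance  # alert when rate > tolerance x baseline

    def record(self, flagged: bool) -> bool:
        """Record one output; return True when the rolling flag rate
        exceeds tolerance x baseline (i.e., behavior looks anomalous)."""
        self.events.append(int(flagged))
        if len(self.events) < self.events.maxlen:
            return False  # wait until the window is full
        rate = sum(self.events) / len(self.events)
        return rate > self.tolerance * self.baseline_rate

if __name__ == "__main__":
    random.seed(0)
    monitor = BehaviorMonitor()
    # Simulate normal traffic (1% flag rate), then a sudden regression (10%).
    for step in range(2000):
        p = 0.01 if step < 1000 else 0.10
        if monitor.record(random.random() < p):
            print(f"anomaly alert at output #{step}; escalate for review")
            break
```

In the simulated run, the alert fires shortly after the flag rate jumps: the kind of early-warning signal that could feed an internal incident review and, for serious incidents, the reporting obligations described above.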
Conclusions
The General-Purpose AI Code of Practice is an important step in creating an ethical and safe framework for AI development in Europe. It not only helps companies comply with the requirements of the Artificial Intelligence Act, but also contributes to strengthening public trust in AI technologies.
Processes such as the development of this Code are critical to ensuring a balance between innovation and the protection of public values such as safety, transparency, and respect for human rights. They also demonstrate the importance of international cooperation and the involvement of various stakeholders in creating global standards for AI governance.
Today, it is critical to develop, adapt, and implement AI-related processes in accordance with existing norms and principles, at both the technical and legal levels, for example through specialized policies, internal instructions, and user agreements.