Have you adapted your business to the EU AI Code? Let's examine the details.
On July 10, 2025, the European Commission published the first Code of Practice for General-Purpose Artificial Intelligence (GPAI) models. It is voluntary in nature but will effectively become the main tool for demonstrating compliance with the requirements of Regulation (EU) 2024/1689 (AI Act).
This Code will determine how businesses in the EU will work with artificial intelligence technologies in the coming years. In the article below, we explain its key provisions, practical implications for companies, and steps that should be taken now.
Importantly, the article is accompanied by comments from Sergiy Barbashyn, one of the experts in the working group that participated in developing the Code. This allows us to view the document not only through the eyes of lawyers, but also from the perspective of someone who directly shaped it.
Why businesses should pay attention
The AI Act establishes mandatory requirements for AI models, but does not describe the mechanisms for their implementation. This creates uncertainty. Companies are left wondering what exactly they need to do to avoid fines and inspections.
The Code removes this uncertainty. It does not add new obligations, but offers tools for their practical implementation. For companies, this means that if you comply with the Code, your actions are recognized as proper compliance with the law.
During consultations, we heard from many companies: “We are ready to comply with the rules, but we don’t want to develop our own policies from scratch.” That is why the Code contains documentation templates and recommendations that can be applied immediately. This saves time and resources.
Who’s covered by the new rules
The Code focuses primarily on GPAI model providers, i.e., those who develop and release general-purpose artificial intelligence systems, including generative ones. However, its scope is much broader.
The provisions of the Code also apply to companies that distribute or integrate such models in the EU. Even open-source communities are in the spotlight if their projects can create systemic risks.
A special category is systemic risk models. These are the most powerful systems capable of affecting security, human rights, or critical infrastructure. The most stringent requirements apply to them.
The discussion on open-source was one of the most difficult. Some experts insisted that any open-source model should be exempt from the rules. But the security argument prevailed. We agreed that small research projects do not require strict supervision, but powerful open-source systems must meet the same requirements as commercial ones.
Main sections of the Code
The document consists of three key areas: transparency, copyright, and security. Each of these directly affects how businesses will work with artificial intelligence in the coming years.
Transparency
Companies must keep records of their models: architecture, data sets used, energy consumption, and intended areas of application. These materials must be stored for at least ten years.
It is recommended that some of the data be made public in order to increase market confidence. At the same time, the company itself determines what is public information and what constitutes a trade secret.
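The "passport" of a model described above can be imagined as a structured record. Below is a minimal sketch in Python; the field names and retention logic are illustrative assumptions, not the official documentation template from the Code.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical record structure: field names are illustrative,
# not the official template prescribed by the Code.
@dataclass
class ModelRecord:
    model_name: str
    architecture: str              # e.g. "decoder-only transformer, 7B parameters"
    training_datasets: list[str]   # provenance of each data set used
    energy_consumption_mwh: float  # energy used during training
    intended_uses: list[str]
    release_date: str
    retention_until: str           # the Code requires storage for at least 10 years

    def to_json(self) -> str:
        # Serialize the record so it can be archived or handed to a regulator.
        return json.dumps(asdict(self), indent=2)

def make_record(name: str, release_year: int, **fields) -> ModelRecord:
    # Retention horizon: at least ten years after release.
    return ModelRecord(
        model_name=name,
        release_date=str(date(release_year, 1, 1)),
        retention_until=str(date(release_year + 10, 1, 1)),
        **fields,
    )
```

A company could then decide, field by field, which parts of such a record are published and which are disclosed only to the regulator on request.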
This is what businesses were most concerned about. At meetings, we heard: “If we are forced to disclose the entire architecture, we will lose our competitive advantage.” Therefore, the final text includes a mechanism whereby a company can refuse to publish sensitive data but is required to provide it to the regulator upon request.
Copyright
A separate section of the Code is devoted to the legality of data use. Only materials that are legally accessible may be used to train models.
In addition, companies must take into account machine-readable signals from rights holders. These include technical instructions or digital markers that restrict the use of content: robots.txt, metadata, and other indicators. Ignoring them will be considered a violation of the Code.
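For robots.txt specifically, honouring such a signal can be automated with Python's standard library. The sketch below uses `urllib.robotparser`; the crawler name "ExampleAIBot" and the sample rules are hypothetical, and real reservations may also appear in metadata or the other indicators the Code mentions.

```python
from urllib import robotparser

def may_collect(robots_txt: str, user_agent: str, url: str) -> bool:
    # Parse the site's robots.txt rules and check whether the given
    # crawler is permitted to fetch the given URL.
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# Hypothetical rights-holder reservation: articles are off-limits to the bot.
rules = """\
User-agent: ExampleAIBot
Disallow: /articles/
"""

may_collect(rules, "ExampleAIBot", "https://example.com/articles/1")  # disallowed
may_collect(rules, "ExampleAIBot", "https://example.com/about")       # allowed
```

A pipeline that logs these decisions also produces the audit trail a company may need when responding to a rights holder's complaint.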
It is also necessary to establish procedures for handling complaints and appoint responsible persons who can respond quickly to claims.
The discussion of the section on copyright was one of the most difficult, as it essentially concerned the boundary between intellectual property protection and AI development. Rights holders demanded clear guarantees that their content would not be used without control. Businesses, in turn, emphasized that excessive restrictions would make model training economically and technically unrealistic. As a result, we established two key principles. First, a company must have legal confirmation that the training materials are used legally (through a license, open agreement, or other legal mechanism). Second, if an author sets machine-readable signals that restrict the use of their content, the developer is obliged to take them into account.
Security and risk management
The most detailed section of the Code concerns systemic risk models. Additional obligations are established for them that go beyond transparency and copyright.
Companies are required to establish a risk management system. This is not a one-time document, but an ongoing process, including threat identification, analysis, and acceptance criteria. If the risks are found to be unacceptable, the development or release of the model must be suspended until the threats are eliminated.
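The gate described above — identify risks, assess them against acceptance criteria, and suspend release if any are unacceptable — can be sketched as follows. The severity scale and threshold are hypothetical; the Code does not prescribe a particular scoring scheme.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative severity scale: the categories and the acceptance
# threshold below are assumptions, not terms defined by the Code.
class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    UNACCEPTABLE = 4

@dataclass
class Risk:
    description: str
    severity: Severity
    mitigated: bool = False

def release_allowed(risks: list[Risk]) -> bool:
    # Gate: any unmitigated risk deemed unacceptable suspends the
    # release of the model until the threat is eliminated.
    return all(
        r.mitigated or r.severity != Severity.UNACCEPTABLE
        for r in risks
    )
```

The point of such a gate is organizational rather than technical: it forces the risk register to exist, to be reviewed, and to block release until every unacceptable entry is addressed.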
In addition, modern technical and cyber security measures must be implemented. These include data cleansing and filtering, monitoring model performance, controlling user access, and gradual and controlled expansion of functionality.
Particular attention is paid to protecting against model weight leakage. If the parameters of a neural network leak, third parties gain control of the system, which can have critical consequences.
Before releasing a systemically risky model to the market, the supplier is required to prepare a Safety and Security Model Report. It describes the architecture, risks, security measures, and justifies their effectiveness. For the most powerful models, such reports must be updated at least once every six months.
In this section, we drew on practices from the financial sector, where risk management has long been mandatory. This is equally critical for AI, as a powerful model without risk control can become a threat to many users. From a legal standpoint, a company must demonstrate that it is capable of identifying unacceptable risks, documenting them, and, if necessary, suspending product development. This requires internal policies, the designation of responsible persons, and regular reporting. Without this, any incident can be considered a result of negligence, with corresponding legal and reputational consequences.
Practical challenges and steps for businesses
Compliance with the Code requires companies to make changes to their internal processes. Whereas previously the technical characteristics of models were often not systematically recorded, now each model must have its own “passport.” The use of data sets requires auditing and confirmation of their legitimacy. Protecting against model weight leakage becomes not only a technical but also a legal task, as uncontrolled use can lead to liability. Companies will also have to develop an organizational culture of compliance.
To adapt in time, businesses should start now:
- take inventory of models and identify those that fall under GPAI and systemic risks;
- verify the legitimacy of data sets;
- establish a process for creating and storing documentation;
- appoint those responsible for compliance and complaint handling;
- for powerful models, prepare a risk management system and security report templates.
Results
Work on the Code has shown that the EU is trying to combine security with the realities of business. That is why:
- a transition period is provided for (until 2027 for existing models);
- a year of "best efforts" has been introduced, during which incomplete implementation will not be penalized;
- documentation templates have been added so that companies do not have to create everything from scratch.
As a lawyer, I consider this a key signal that the regulator is ready to work with business, not against it. But this period of “soft mode” will quickly pass. Therefore, those companies that start adapting now will win.
The GPAI Code is not just a set of recommendations, but a practical guide that helps companies comply with the AI Act, reduce legal risks, and demonstrate transparency to customers and partners. Businesses that start preparing now will not only be protected from possible sanctions, but will also gain a competitive advantage in the European market.
Published by AIN.UA