Your AI guide: how IT companies and startups can prepare for AI regulation
Artificial intelligence is no longer just a technological novelty – it has become a key element of modern business. By 2024, more than 72% of companies worldwide had integrated AI solutions into at least one business function, and the global AI market exceeded $233 billion. In response to this rapid adoption, regulators around the world – including the EU, the US, Canada, and the UK – are introducing new rules aimed at ensuring the security, transparency, and ethical use of AI systems.
It is important to understand that Ukrainian companies developing AI systems can fall under international regulations even if their offices are not located in the EU. Given that a significant part of the Ukrainian IT sector is focused on Western markets, compliance with international regulation is critical to competitiveness and legal security.
Key regulatory initiatives: what’s on the horizon
EU. The EU has adopted the AI Act, the world’s first comprehensive artificial intelligence law. It sorts all AI systems into four risk levels. For example, unacceptable risk covers systems that can seriously violate fundamental human rights, such as social scoring or manipulative use of emotions – their use is completely banned in the EU from February 2, 2025. Minimal risk covers tools such as spam filters or recommender systems – they are not subject to special regulation but must still follow ethical and security principles.
The law also establishes separate requirements for general-purpose AI (GPAI) models – large models trained on broad data sets that can be used across many industries. To ensure effective implementation of the AI Act, the European AI Office has been established to coordinate compliance across member states.
Additionally, the EU plans to launch the AI Act Service Desk, a dedicated support service for businesses that should help small and medium-sized enterprises adapt their products to the requirements of the new legislation. The European strategy also includes active stimulation of investment in innovation, including the transformation of traditional industries with the help of AI.
USA. In 2023, President Biden issued Executive Order 14110, which required companies to test their AI systems for safety. In 2025, President Trump rescinded the order and instead proposed a new AI development strategy with fewer restrictions. At the same time, some states, such as California, are developing their own laws. A federal AI action plan is also in the works, but its current status is unclear.
United Kingdom. The UK is working on AI safety legislation and has already set up a dedicated body to test such technologies: the AI Safety Institute, later renamed the AI Security Institute.
Canada. The country is developing its own law, AIDA (the Artificial Intelligence and Data Act), which is expected to include risk-assessment requirements for AI systems, obligations for companies to explain how their AI systems work and what data they use, and AI ethics requirements.
Ukraine. The country does not yet have a separate law on artificial intelligence, but it needs to harmonize its legislation with European standards, in particular the EU AI Act. To that end, the Ministry of Digital Transformation has developed a “White Paper” on AI regulation, a document that sets out the state’s vision of a legal approach to AI adapted to Ukrainian realities. The Ministry is also actively promoting AI initiatives, including the creation of a Ukrainian-language LLM and the launch of a regulatory sandbox for testing innovative AI solutions in a controlled environment. These steps should not only harmonize Ukrainian regulations with European ones but also help local AI initiatives compete at the global level.

Does the EU AI Act apply to you?
Artificial intelligence regulation in the EU, in particular the AI Act, has an extraterritorial effect. It applies not only to companies registered in the EU, but also to any organizations from other countries if they supply AI systems to the European market or if their use affects individuals within the EU. To determine whether your activities fall within the scope of the AI Act, you should consider the following questions:
Is AI part of your product or business processes?
If artificial intelligence is used in the design, implementation, or operation of your solutions, it may already be subject to regulation – especially if the result interacts with or influences decisions about users in the EU.
Your role in the AI chain
One of the key aspects of AI regulation is your role in the AI supply chain. The AI Act defines specific responsibilities for each type of participant:
- Providers. Responsible for ensuring that AI systems comply with transparency, security, and risk management requirements.
- Deployers. Organizations that use AI systems in their own business processes.
- Distributors. Ensure reliable distribution of AI products within the EU.
- Authorized representatives. Appointed to interact with regulators on behalf of the provider.
- Importers. Responsible for ensuring that products entering the EU market meet all AI Act requirements.
Understanding your role will help you define your responsibilities more precisely and avoid regulatory violations.
Do you use third-party AI solutions in your products or services?
Even if you don’t develop your own models but only integrate third-party tools – for example, via OpenAI’s API – you may be liable for their use. This is especially important if your product is entering the EU market: in this case, you need to make sure that integrated AI services meet transparency, security, and data protection requirements.
According to Gartner, by 2027, most companies using AI will have to comply with the new rules, even if they do not consider themselves AI companies. This is especially relevant for the Ukrainian IT sector: according to the IT Ukraine Association, in 2023, IT exports amounted to $6.7 billion, and a significant part of this amount is accounted for by AI solutions. If you work with the EU, US, or Canadian markets, these changes already affect you.
What you can already do: a checklist of actions
Preparing for AI regulation is not a burden but an opportunity to get ahead in a competitive market. For Ukrainian IT companies that export many of their products to the EU and the US, early action means saving resources and gaining customer trust. Here are five practical steps you can take today:
- Conduct an AI functionality mapping. Find out where your business uses AI: in products (e.g., chatbots, recommendation systems) or internal processes (analytics, automation). This will help you understand the scope of regulatory requirements.
- Assess risks by AI Act categories. Determine the risk level of your AI system. For example, high-risk systems require stricter compliance.
- Create an ethics policy and AI Governance framework. Develop internal rules for the use of AI that guarantee transparency and non-discrimination. A simple example is to specify how you avoid bias in algorithms.
- Document processes. Record data sources for the AI system, how models are trained, and how results are monitored. This is the basis for audits provided for by the AI Act and proof of your transparency.
- Demonstrate ethical responsibility. Adherence to ethical principles in AI is not only a requirement of the times but also a foundation of trust. Aligning with international standards, such as the OECD, UNESCO, or IEEE recommendations, helps build transparent practices, and participation in professional initiatives such as the AI Ethics Impact Group is one way to demonstrate that commitment.
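The mapping and risk-assessment steps above can be sketched as a simple internal inventory. This is an illustrative sketch only: the four categories mirror the AI Act’s risk levels, but the example systems, their assigned levels, and the to-do items are hypothetical placeholders, not legal advice.

```python
from dataclasses import dataclass

# The AI Act's four risk levels, from strictest to lightest.
RISK_LEVELS = ["unacceptable", "high", "limited", "minimal"]

@dataclass
class AISystem:
    name: str
    purpose: str           # what the system does
    risk_level: str        # one of RISK_LEVELS, assessed case by case
    uses_personal_data: bool

def compliance_todo(system: AISystem) -> list[str]:
    """Return a rough compliance to-do list based on risk level.

    Illustrative only: real obligations depend on your role in the
    supply chain and on a case-by-case legal assessment.
    """
    if system.risk_level not in RISK_LEVELS:
        raise ValueError(f"unknown risk level: {system.risk_level}")
    if system.risk_level == "unacceptable":
        return ["discontinue or redesign: the practice is banned in the EU"]
    todos = ["document purpose, data sources, and monitoring"]
    if system.risk_level == "high":
        todos += ["prepare technical documentation", "run a conformity assessment"]
    if system.risk_level == "limited":
        todos += ["disclose AI use to end users"]
    if system.uses_personal_data:
        todos += ["update the Privacy Policy (legal basis for processing)"]
    return todos

# Hypothetical inventory of where the business uses AI.
inventory = [
    AISystem("support-chatbot", "customer support", "limited", True),
    AISystem("cv-screener", "candidate ranking", "high", True),
]
for system in inventory:
    print(system.name, "->", compliance_todo(system))
```

Keeping even a minimal inventory like this makes the later steps – documentation, audits, contractual role allocation – much easier to scope.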
Mistakes to avoid
Waiting until 2026 is a wrong strategy. According to the European DIGITAL SME Alliance, adaptation to the AI Act for high-risk systems can take 12-18 months, and fines for non-compliance can reach €35 million or 7% of global turnover. Delays increase the risk of inspections and unforeseen expenses.
Shifting responsibility to third parties and ignoring documentation. Outsourcers and integrators often assume that compliance is the customer’s problem. However, without clear contractual terms, liability may fall back on the developer. According to ENISA’s Artificial Intelligence Cybersecurity Challenges, in 2024, 25% of cyberattacks in the EU were related to vulnerabilities in AI systems caused by insufficient data control. If your AI module becomes the source of a problem and your contracts do not delineate roles, a client or regulator may hold you liable.
Companies that do not declare the principles of responsible AI use risk losing trust. According to Edelman, 34% of European consumers are ready to abandon brands that use AI opaquely, and scandals with algorithmic bias in 2023-2024 caused a 12% drop in the market value of companies. Transparency and ethical policies are becoming critically important, especially in the EU market.
Documentation and policies: what should be updated now
The AI Act establishes different amounts of mandatory documentation depending on the risk level of the system. The higher the risk, the stricter the requirements for technical documentation, compliance assessment procedures, and internal policies.
Companies should reconsider:
- Technical documentation – it should reflect the purpose of using AI, data sources, training, testing, monitoring, and risk management mechanisms.
- Privacy Policy – it should clearly state which AI mechanisms process personal data and on what legal grounds.
- Terms of Use – it is necessary to specify the role of AI in the service operation, risks, and how responsibility is distributed between the parties.
- Internal AI compliance policies – it is advisable to implement unified standards of AI system management that cover ethical principles, procedures for model evaluation and updating.
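As a rough illustration of the “document processes” item, the records above can be kept as structured, version-controlled data. The field names below are our own illustration, not an official AI Act template.

```python
import json
from datetime import date

# Hypothetical field names for an internal AI documentation record;
# this is not an official AI Act template.
model_record = {
    "system": "cv-screener",
    "intended_purpose": "rank job applications for human review",
    "risk_level": "high",
    "data_sources": ["internal HR database (anonymized)", "public job-ad corpus"],
    "training": {"last_trained": str(date(2024, 11, 1)), "method": "gradient-boosted trees"},
    "monitoring": {"metric": "selection-rate parity", "review_cadence": "quarterly"},
    "human_oversight": "a recruiter reviews every automated ranking",
}

# A serialized record is easy to version-control and hand to auditors.
print(json.dumps(model_record, indent=2))
```

Whatever format you choose, the point is that purpose, data sources, training, monitoring, and oversight are recorded somewhere auditable rather than living in people’s heads.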
According to the AI Multiple survey, 77% of companies have already prioritized AI compliance, and 69% have implemented basic ethical practices. Such actions not only facilitate future audits and demonstrate compliance, but also increase trust from customers, partners, and regulators.
Practical cases from the experience of Barbashyn Law Firm
We have supported a number of Ukrainian companies in adapting their AI products to the new regulatory requirements. Below are two cases that illustrate different approaches to AI Act compliance depending on the role in the supply chain and the type of AI solution.
Case 1: AI in HR and finance. A Ukrainian startup integrated an AI module into its HR/fintech solution targeting the EU market. Under the AI Act, the company qualifies as a provider, which obliges it to ensure that the AI solution meets the law’s requirements. In cooperation with Barbashyn Law Firm, the company took the following steps to ensure compliance:
- classifying the system as high-risk;
- developing basic technical documentation and policies;
- updating the Terms of Use and Privacy Policy;
- introducing explanations of AI logic and transparency mechanisms.
This helped to identify legal and reputational threats, clearly define the obligations of the parties, and ensure compliance. Such steps reduced the risk of fines and increased business confidence.
Case 2: Integration of third-party AI services. A company that integrated a third-party AI service into its B2B analytics platform acted as a distributor in the supply chain. Under the AI Act, it is subject to requirements on compliance, transparency, and control over the use of AI systems. Practical steps implemented by the company:
- checking the AI supplier for compliance;
- inclusion of obligations in the contractual documentation;
- updating the internal integration policy;
- communication with users about the availability of AI functionality.
A company that does not develop AI on its own still bears legal responsibility for its use. Comprehensive documentation helps to avoid unforeseen issues during an audit or inspection.
Conclusions
Artificial intelligence regulation is no longer a matter of the future – it is already shaping the rules of the game in key markets such as the EU. For companies working with AI technologies, it is important not only to keep abreast of changes, but also to have a deep understanding of regulators’ expectations: both in terms of transparency and the distribution of responsibilities in the supply chain. Different jurisdictions form their own approaches, but the EU already has a clear structure of requirements through the AI Act. This means that if a company is seeking to enter the European market or already operates there, the first thing to do is to define the role in the chain, classify the AI system by risk level, and check the compliance of documentation.
The European example demonstrates that transparency, explainability, and responsibility are becoming mandatory elements, not optional extras. Companies that already update their internal policies, Terms of Use, and Privacy Policy, and publicly commit to principles of ethical AI use, gain a strategic advantage. Preparing for the AI Act is not just a formality but a tool for risk management, business sustainability, and professional reputation. It is time to act: systematize processes, implement clear rules, and move from reaction to management.
Published by AIN.UA