Nikita Veremeev
16.09.2025 | 6 min read
Updated: 16.09.2025
In 2025, 82% of international companies deploying artificial intelligence faced the risk of regulatory sanctions in the EU – and only 14% of them were able to adapt their processes in time to the new European AI regulation requirements. These are not just numbers: this is a reality that is changing the technology development strategy of businesses across all industries, from financial services to logistics and retail. Why did such a high percentage of companies find themselves unprepared for the new standards? The reason is the unprecedented complexity and scale of the European Artificial Intelligence Act (EU AI Act), entering into force in 2025 and already affecting business processes, investment decisions and corporate governance not only in the EU, but also in Asia, the CIS and the Middle East.
As CEO of COREDO, I see daily how issues of AI compliance, implementation of the AI Act for international companies and cross-border AI regulation are becoming key for our clients. What should you do if your company operates AI systems in Europe but the head office is in Singapore or Dubai? What risks does non-compliance with the EU AI Act entail? How to prepare the business for an audit and avoid fines reaching tens of millions of euros? How to ensure algorithmic transparency and data governance so as not to lose the trust of customers and partners?
In this article I will analyze in detail the structure and logic of the EU AI Act, and show with examples from COREDO’s practice how international companies adapt their processes, what mistakes they make and which solutions work in practice.
If you want not only an overview of the law but also concrete tools to prepare your business, I recommend reading the article to the end.
Here you will find strategic ideas, checklists and recommendations that will help not only to comply with the new requirements but also to use them for growth and strengthening your position on the global market.
EU AI Act: what businesses need to know

The EU AI Act is the world’s first comprehensive regulatory act governing artificial intelligence based on a risk-oriented categorization principle. The law applies to all companies offering or integrating AI systems on the territory of the EU, regardless of the place of registration or headquarters. The key objective is to ensure transparency, safety and accountability in the use of AI in business, to protect the rights and freedoms of users, and to create a unified standard of AI governance for Europe and the world.
EU AI Act — who does it apply to?
The EU AI Act covers not only European companies but all international suppliers and integrators whose AI solutions are available on the EU market. This means that even if your business is registered in Singapore, the United Kingdom or Dubai, if you offer AI services to European clients you fall under the law. The COREDO team has implemented projects for the implementation of the AI Act for international companies where the key challenge was integrating EU requirements into local processes and downstream integration with existing corporate systems.
Special attention is paid to interaction with EU AI regulators: companies are required to register high-risk AI systems, provide technical documentation and undergo audits for compliance with AI standards. For companies operating in multiple jurisdictions, it is critically important to ensure cross-border AI regulation and to synchronize processes between European and Asian offices.
Categories of AI systems – types and how they differ
The EU AI Act introduces a strict classification of AI systems by risk level:
- Prohibited AI practices: these include social scoring, manipulative algorithms, biometric identification without consent, and emotion recognition in public spaces. Such systems are completely banned from use and sale in the EU.
- High-risk AI systems: solutions that affect people’s rights, safety and health (for example, credit scoring, medical AI, HR algorithms). They require mandatory conformity assessment, registration and regular audits.
- General Purpose AI (GPAI): general-purpose systems, such as large language models, that can be integrated into various business processes. For GPAI models with systemic risk, separate requirements are introduced regarding transparency, publication of information about training data and management of systemic risks.
COREDO’s practice confirms: correct categorization of AI systems is the first step to reducing regulatory risks and successfully passing an audit.
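The risk tiers above can be sketched as a simple first-pass triage. This is an illustrative sketch only: the use-case names and keyword matching are hypothetical, and real categorization requires legal analysis of the Act's annexed use-case lists, not string comparison.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    MINIMAL = "minimal"

# Hypothetical keyword sets drawn from the examples in this article;
# the authoritative lists live in the AI Act itself.
PROHIBITED_USES = {"social scoring", "emotion recognition in public spaces"}
HIGH_RISK_USES = {"credit scoring", "medical diagnosis", "hr screening"}

def classify(use_case: str) -> RiskTier:
    """Rough first-pass triage of an AI use case by EU AI Act risk tier."""
    uc = use_case.lower()
    if uc in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if uc in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL

print(classify("credit scoring"))  # RiskTier.HIGH_RISK
```

A triage like this is useful only as an intake filter that routes systems to a proper legal conformity assessment.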
When does the AI Act come into force?
The AI Act is implemented in stages:
- From 2 February 2025, bans on prohibited AI practices (social scoring, manipulative algorithms) come into force.
- From 2 August 2025, GPAI providers are required to publish information about training data, assess and disclose systemic risks.
- By December 2025, all high-risk systems must complete registration and enter the register before going to market.
- By mid-2026, requirements are fully harmonized: all provisions of the law become mandatory for businesses.
COREDO supports clients at every stage, ensuring timely preparation of technical documentation and interaction with notified bodies and the EU AI Office.
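The staged timeline above can be expressed as a small deadline checker. The milestone dates are taken from this article; the exact December 2025 and mid-2026 dates used below are placeholder assumptions for illustration, not legal deadlines.

```python
from datetime import date

# Milestones as described in this article; the last two dates are
# assumed placeholders for "December 2025" and "mid-2026".
MILESTONES = [
    (date(2025, 2, 2), "Bans on prohibited AI practices apply"),
    (date(2025, 8, 2), "GPAI transparency and systemic-risk duties apply"),
    (date(2025, 12, 31), "High-risk systems must be registered"),
    (date(2026, 6, 30), "Full harmonization: all provisions mandatory"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the obligations already applicable on a given date."""
    return [label for deadline, label in MILESTONES if today >= deadline]

for item in obligations_in_force(date(2025, 9, 1)):
    print(item)
```

Running such a check against a compliance calendar makes it easy to see which duty tranches already bind the business on any given date.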
EU AI Act requirements for foreign companies

AI compliance here means more than formal reporting: it is a comprehensive restructuring of processes, from Due Diligence AI to implementing technical standards and a culture of risk management.
Prohibited AI practices and risks
Prohibited AI practices are not only a legal but also a reputational risk. The use of social scoring, manipulative algorithms or biometric identification without the user’s explicit consent entails not only fines but also blocking of access to the EU market. High-risk AI systems require a conformity assessment, the implementation of serious-incident reporting mechanisms and regular auditing.
The solution developed at COREDO for one fintech client included the deployment of automated emotion recognition tools with mandatory registration and configuration of algorithmic transparency, which made it possible to pass audits by notified bodies and avoid fines.
Requirements for GPAI with systemic risk
GPAI models are the new focus of regulators. Providers are required to disclose information about training data, assess systemic risks and publish reports on downstream integration. For GPAI models with systemic risk, additional requirements are introduced for transparency, data governance and the publication of information on copyright compliance.
Our experience at COREDO has shown that timely preparation of documentation and the implementation of internal procedures for assessing systemic risks not only ensure compliance, but also increase the trust of investors and partners.
Thus, proper preparation of all documentation facilitates passing audits and minimizes risks when interacting with supervisory authorities.
Documentation and reporting for audit
Technical documentation for the AI Act is not just formal reports, but a comprehensive set of documents including a description of the AI system architecture, algorithms, data sources, personal data processing procedures, risk assessments and response plans for serious incidents. Downstream providers are required to integrate their processes with the requirements of the primary supplier and ensure transparency at all stages.
COREDO has developed templates and checklists for preparing AI system audits, which include requirements of the Code of Practice, AI technical standards and best practices for incident management.
Thus, thorough documentation and integration of all processes not only contributes to successful audit completion but also helps minimize risks associated with non-compliance with the AI Act.
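A documentation gap check along the lines described above can be sketched as follows. The artifact names are hypothetical labels derived from the documents this article lists; the authoritative checklist comes from the AI Act's technical-documentation requirements and the applicable Code of Practice.

```python
# Hypothetical artifact labels based on the documentation set named in
# this article; real audits follow the AI Act's own annexes.
REQUIRED_DOCS = {
    "system_architecture",
    "algorithm_description",
    "data_sources",
    "personal_data_procedures",
    "risk_assessment",
    "incident_response_plan",
}

def missing_documents(provided: set[str]) -> set[str]:
    """Report which required artifacts are absent before an audit."""
    return REQUIRED_DOCS - provided

gaps = missing_documents({"system_architecture", "risk_assessment"})
print(sorted(gaps))
```

Even a trivial gap report like this, run before engaging a notified body, surfaces missing artifacts while there is still time to prepare them.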
Fines for violating the AI Act
Fines for violating the AI Act are among the highest in the history of European regulation: up to €35 million or 7% of a company’s annual global turnover. In addition to financial losses, companies risk product blocking, license withdrawals and serious reputational consequences.
COREDO’s practice confirms: proactive management of legal and operational risks, implementation of liability mechanisms for harm caused by AI, and regular engagement with regulators are key to reducing risks and maintaining competitiveness.
Implementing business processes under the AI Act

Implementing the AI Act for international companies is a strategic project that requires a review of business processes, investment strategies and corporate governance.
How to determine an AI system’s category and its status
Compliance assessment of AI systems begins with a risk-oriented approach: it is necessary to conduct a systemic risk assessment, classify the system by risk level (prohibited, high-risk, GPAI, low-risk) and determine the requirements for documentation, audit and reporting.
The COREDO team has implemented projects to automate the categorization process, which has allowed clients to quickly adapt to new requirements and reduce audit costs.
Data management and cybersecurity: what matters?
The AI Act imposes strict requirements on data governance and transparency: companies must ensure data privacy, open data compliance, implement algorithmic transparency mechanisms and guarantee the cybersecurity of AI systems. The processing of personal data in AI must comply with GDPR standards and the new requirements for protecting user rights.
COREDO implements comprehensive AI cybersecurity solutions, including automated monitoring systems, incident response and regular security audits.
Impact on supply chains, investments and innovation
The AI Act changes the logic of AI solution supply chains: companies must control not only their systems but also the integration of third-party models (downstream integration), ensure copyright compliance and manage ethical AI risks. The AI Act’s impact on AI investment is reflected in increased due diligence and transparency requirements, which raise project costs but at the same time reduce the risk of long-term losses.
COREDO supports investment deals by assessing the profitability of AI implementation (AI ROI assessment) taking into account the new regulatory requirements.
Implementing practices in different jurisdictions: EU, Asia, Africa
Implementing the AI Act in Asian and African countries is associated with a number of challenges: differences in national standards, the absence of a unified audit infrastructure, and difficulties in downstream integration with European systems. Best practices include creating a unified compliance management platform, regular engagement with national supervisory bodies and implementing internal procedures to adapt business processes.
COREDO has implemented cross-border AI regulation projects where the key success factor was the integration of EU requirements with local regulatory standards.
Monitoring compliance with the AI Act and working with regulators

Effective engagement with regulators is the foundation for successful AI Act implementation and for minimizing legal risks.
Role of the AI Office and national regulators in the EU
The EU AI Office is the central body coordinating the implementation and enforcement of the AI Act. It is responsible for developing technical standards, publishing the Code of Practice and interacting with national AI supervisory authorities. The European Artificial Intelligence Board ensures harmonization of requirements between EU countries and the exchange of information on systemic risks.
COREDO supports clients in interactions with the EU AI Office, ensuring timely system registration and audit preparation.
Audit and inspections for supervisory authorities
Preparation for auditing AI systems includes AI due diligence, the collection and structuring of technical documentation, and interaction with notified bodies and national supervisory authorities. It is important not only to pass a formal inspection but also to build processes for regular monitoring and responding to serious incidents.
The solution developed by COREDO includes automation of audit preparation processes and integration with the AI Service Desk for prompt interaction with regulators.
Recommendations for international companies

The AI Act is both a challenge and an opportunity for international businesses. Companies that adapt their processes in time gain a competitive advantage and access to the largest market for AI solutions.
Checklist for compliance preparation:
- Categorize all AI systems by risk level
- Develop and implement data governance, transparency, and cybersecurity procedures
- Prepare technical documentation and reporting according to AI Act standards
- Organize regular audits and monitoring of systemic risks
- Ensure engagement with the EU AI Office and national supervisory authorities
- Implement downstream integration procedures for third-party models
- Assess the profitability of AI deployment taking new requirements into account
Tips to minimize risks:
- Use internal and external AI due diligence tools
- Invest in automating audit preparation processes
- Engage experts and consultants with experience implementing the AI Act in international companies
- Respond promptly to changes in technical standards and regulator requirements
COREDO’s practice shows: a comprehensive approach to AI compliance is the key to sustainable development and reducing legal and operational risks.
Key questions for entrepreneurs
What are the key risks to international business of non-compliance with the EU AI Act? Financial fines up to €35 million, product blocking, license revocations, reputational damage, and restricted access to the EU market.
How to determine whether my AI system is categorized as high-risk or prohibited? Conduct a systemic risk assessment and compare the system’s functionality with the list of prohibited and high-risk practices under the AI Act.
What steps are necessary to prepare a company for an AI Act compliance audit?
- System categorization, preparation of technical documentation, implementation of incident management procedures, engagement with notified bodies.
What are the documentation and reporting requirements for GPAI models? Publication of information about training data, systemic risk reports, algorithmic transparency, downstream integration.
How does the AI Act affect the strategy for implementing and scaling AI solutions in international companies? It requires revising business processes and integrating new risk management, transparency, and accountability procedures, which affects investment attractiveness and the speed of scaling.
Useful applications and services
Useful applications and services help businesses and developers account for new requirements and leverage the opportunities that arise with the adoption of the EU AI Act. Understanding the key stages of this law coming into force will allow you to adapt processes and tools in advance to comply with the new rules and operate effectively in a regulated market.
Stages of the EU AI Act coming into force
| Stage/Requirement | Effective date | Brief description |
| --- | --- | --- |
| Ban on prohibited practices | 2 February 2025 | Social scoring, manipulative algorithms |
| Transparency requirements for GPAI | 2 August 2025 | Data publication, reporting, risk assessment |
| Registration of high-risk systems | December 2025 | Entry into the register before market launch |
| Full harmonization of requirements | Mid-2026 | All provisions of the law become mandatory |
Useful resources and templates for work
- Official European Commission guides on AI compliance
- Checklists for preparing technical documentation for the AI Act
- Sample reporting templates for GPAI and high-risk systems
- Support services for the AI Service Desk and contacts of notified bodies
COREDO remains your reliable partner in the world of new AI regulation standards in Europe, providing not only legal protection but also strategic support for growth and innovation.