Practice at COREDO confirms: the question “who is responsible for AI errors” is no longer an academic discussion. It is a daily management task spanning AI liability, compliance, contracts and insurance, and it determines the cost of capital, time-to-market and strategic resilience.
Why is the board of directors responsible for AI?

Our experience at COREDO has shown that even “moderate” incidents, such as erroneous AI recommendations in sales, lead to costly process reworks and the revision of contractual obligations. Add to that the jurisdictional issues in cross-border AI errors, and you’ll understand why companies with operations in Europe, Asia and the Middle East are building a unified accountability architecture for autonomous systems and their suppliers.
Regulatory framework of Europe, Asia and the CIS

The EU has adopted the AI Act, which establishes a risk-based approach and defines specific roles for the persons responsible for high-risk systems (the EU AI Act’s requirements for responsible persons). AI regulation in the EU is closely linked to the GDPR and liability for automated decisions, including the right to an explanation and the administrative rights of data subjects. AI regulators in Europe coordinate with the EDPB and ENISA, while national agencies issue sectoral guidance and create regulatory sandboxes for AI.
In Asia the regulatory landscape is fragmented, but requirements for algorithmic transparency, bias control and data security are being tightened across the board. Countries where the COREDO team actively works, such as Singapore, promote soft-regulation models with strict standards for privacy by design and audits. In the CIS we see a move toward harmonization with international ISO standards, the OECD AI Principles and the UNESCO recommendations on AI ethics.
Strict liability vs negligence: manufacturer and supplier liability

Lawyers are familiar with two main constructs: strict liability versus liability for negligence in AI. Under strict (product) liability for model defects the questions are the existence of a defect and causation; under negligence, proof of a breach of the standard of due care. In the European approach, product liability for model defects and the legal foundations of strict product liability can reach both the AI manufacturer and the integrator if the defect arose from modification or incorrect integration.
Liability of model providers and the responsibility frameworks for platforms as service providers become more acute when open-source models are used. The licensing terms of open-source models and the legal assessment of open AI APIs and third-party integrations require careful certification of the supply chain: provenance control, model cards, datasheets for datasets, security audits of code and model provenance analysis.
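As a minimal sketch of provenance control, the check below compares a model artifact’s SHA-256 digest against a registry record captured at release time. The registry schema (a dict with a `sha256` field) is a hypothetical illustration, not any specific product’s API.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a model artifact, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_provenance(path: str, registry_entry: dict) -> bool:
    """Compare the artifact's digest with the registry record made at
    release time (hypothetical schema: {"sha256": "<hex digest>"})."""
    return sha256_of_file(path) == registry_entry["sha256"]
```

A mismatch here is exactly the kind of evidence that matters in a supply-chain dispute: it shows the deployed artifact is not the one the vendor certified.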
Risks in contracts: indemnity and SLA/SLO

The solution developed at COREDO always begins with mapping risks to contractual AI risk management mechanisms. Contractual offloading of AI liability requires multi-level clauses: indemnification for IP infringements and privacy breaches, clauses prohibiting the use of data for re-training, warranties of compliance with standards and security, and a clear limitation of liability with carve-outs for intent and gross negligence.
- Indemnities and clauses in contracts with AI vendors set out coverage for claims related to bias, security, leaks and defects. It is important to determine who is responsible for harm caused by AI to the client when the model operates as part of a complex solution.
- Model SLAs and SLOs for business applications define target levels of accuracy, latency, availability and data quality metrics. Vendor due diligence and security SLAs include requirements for encryption, access management, logging and incident response times.
- How to allocate responsibility between the customer and the AI vendor? Through a matrix of “who manages data/training/deployment/monitoring” and tying risks to control domains. For generative models add risk management practices when using generative AI: content filters, watermarking, a deepfake policy and human-in-the-loop for sensitive decisions.
- Best practices contract templates for procuring AI solutions include provisions on regulatory changes (change-in-law), obligations to maintain an audit trail, provide evidence packages and cooperate during audits.
How to embed control into engineering

Compliance and due diligence for AI providers start with assessing the provider against AI standards and certifications (ISO/IEC 23894, ISO/IEC 27001 and national standards), as well as GDPR compliance. Regulatory requirements for model audits, algorithm audits and proof of due diligence demand documentation across the whole chain: from data to deployment.
COREDO’s practice confirms: legal risk decreases when technical processes are transparent. To this end we implement:
- Algorithmic transparency and explainability: model cards, datasheets for datasets, explainability metrics (SHAP, LIME, counterfactuals) and interpretability and model debugging tools.
- Model version control and provenance: immutable artifact registries, role‑based access and model change audit, strict tagging policies for data and features.
- Decision logging and audit trail for AI plus forensic logging for investigating causes of errors; this is the basis for defence in disputes and for regulatory reporting.
- Algorithmic bias and fairness metrics, regular robustness testing and adversarial testing, as well as red teaming and stress testing of models.
- Model drift control and performance monitoring, KRI and SLO, external validation and model benchmarking, peer review of models and independent technical audit.
- MLOps practices for controlled risk and comparison of DevOps vs MLOps for model stability: reproducibility pipelines, data control, pre-release testing.
- Data quality control tools and data validation, data quality control during cross-border transfer and data governance.
- Compliance by design and documenting AI decisions, privacy by design and privacy impact assessment, as well as algorithmic impact assessment (AIA) for high-risk systems.
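To make the decision-logging and forensic-logging points concrete, here is a minimal sketch of a hash-chained audit trail: each record embeds the digest of the previous one, so tampering with any earlier record is detectable in a forensic review. The record schema is a hypothetical example, not a standard format.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained log of model decisions (illustrative)."""

    def __init__(self):
        self.records = []
        self._prev = "0" * 64  # genesis digest

    def append(self, model_version: str, inputs: dict, decision: str) -> dict:
        """Record one decision, chaining it to the previous record's digest."""
        record = {
            "ts": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "prev": self._prev,
        }
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._prev = record["digest"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every digest; False means the trail was tampered with."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "digest"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev"] != prev or recomputed != r["digest"]:
                return False
            prev = r["digest"]
        return True
```

In production the same idea is usually delegated to an immutable store, but even this sketch shows why an intact chain is strong evidence in a dispute: a party cannot quietly rewrite history after an incident.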
Where AI errors in AML/KYC are particularly costly
In payment and credit services, the question “who is responsible for erroneous algorithmic decisions in finance” is resolved at the intersection of banking supervision, the AI Act and the GDPR. Regulatory requirements to explain lending decisions force operators to demonstrate explainability, traceability and the absence of discrimination.
Liability for AI errors in AML and KYC systems also covers false positives and false negatives. Managing such incidents requires human oversight and human-in-the-loop, with clear escalation and logging playbooks. AML automation errors entail regulatory liability, fines and enforcement orders if the operator cannot demonstrate due diligence and the adequacy of its algorithms.
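A minimal sketch of such incident management is below; the escalation thresholds (a 5% false-positive limit and a 1% false-negative limit) are hypothetical illustrations, not regulatory figures:

```python
def confusion_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """False-positive rate = FP / (FP + TN); false-negative rate = FN / (FN + TP)."""
    return {
        "fpr": fp / (fp + tn) if fp + tn else 0.0,
        "fnr": fn / (fn + tp) if fn + tp else 0.0,
    }

def escalation(rates: dict, fpr_limit: float = 0.05, fnr_limit: float = 0.01) -> str:
    """Route to a playbook step. Missed true hits (FNR) are treated as the
    more severe breach, since they mean illicit activity went unflagged."""
    if rates["fnr"] > fnr_limit:
        return "halt-and-review"    # model off, fall back to manual checks
    if rates["fpr"] > fpr_limit:
        return "human-in-the-loop"  # analysts triage every flagged case
    return "monitor"
```

The point is not the specific numbers but that the playbook is executable and logged: a regulator asking for proof of due diligence can be shown exactly when and why each escalation fired.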
Insurance and preparedness for claims
Who is responsible for harm caused to a client by AI is often determined by how prepared the company is for an incident. An AI incident response playbook should include model shutdown scenarios, fallbacks to manual procedures, regulator notification, and customer communications. Forensic logging and complete decision logs reduce the cost of investigations and accelerate resolution.
AI risk insurance is another pillar. In practice we structure coverage through:
- Insurance products: cyber for data breaches and security incidents; professional indemnity and tech E&O for professional liability, software defects, and service failures.
- Selection criteria for AI insurance coverage: geography of risk, type of solutions (generative/classification), data volumes, presence of human-in-the-loop, incident history, and regulatory requirements.
- Pricing of insurance premiums for AI risks depends on MLOps maturity, logging quality, external audits, and available certifications.
The role of the board of directors: strategy
The economics of AI scaling amplify the consequences of model defects: systemic risk from widespread use of homogeneous models can lead to simultaneous failures for many clients. Model resilience metrics under scaling, management of technical debt accumulated during model development, and external validation and benchmarking become strategic KPIs.
How COREDO allocates and retains risk
- EU, licensing of a payment institution. The client implemented scoring using AI. We built explainability based on SHAP and counterfactuals, conducted a privacy impact assessment and an algorithmic impact assessment (AIA), and prepared model cards and datasheets for datasets. We contractually established indemnification for discrimination and limited the client’s liability provided compliance with SLA/SLO and human‑in‑the‑loop procedures. The regulator approved the model within a regulatory sandbox, and the subsequent registration and notifications to regulators about high‑risk systems were completed without comments.
- Singapore, fintech provider AML/KYC. The system produced a high level of false positives. The COREDO team implemented incident management for false positives and false negatives, strengthened drift monitoring and adversarial testing. We documented a vendor warranty on model quality and quick version downgrade procedures in the contracts. Result — reduced operational costs and confirmation of compliance with the national agency’s requirements.
- Dubai, recommendation and advertising platform. The task was to control compliance of advertising recommendations and manipulations and to regulate deepfakes. Our solution included watermarking, a content policy, and clauses on the provider’s right to disable generative content in case of compliance risks. This allowed the platform to avoid consumer claims and ensure the right to an explanation during moderation.
- United Kingdom, HR automation using open‑source models. We conducted a legal review of open‑source model license terms and third‑party integrations, implemented fairness metrics and an independent peer review. We contractually established a division of responsibility between the client and the AI vendor, including warranties and limitation of liability, as well as a due diligence checklist for AI vendors with requirements for audit trails and data governance.
Due diligence checklist: implementation steps
To minimize AI legal risk and accelerate integration, I recommend a sequence that the COREDO team has refined across different markets:
- Risk classification and regulatory pathway
- Identify the risk category under the AI Act and the relevant guidance from regulators (EDPB, ENISA, national agencies).
- Check the need for registration/notification and participation in regulatory sandboxes.
- Data and IP
- Conduct data mapping, manage third‑party rights in training data, protect IP, and assess trade‑secret disclosure risks.
- Limit cross‑border transfers, implement privacy‑by‑design and DPIA, and control the vendor’s data usage terms.
- Model and engineering
- Implement MLOps: versioning, KRI, drift monitoring, robustness tests and adversarial testing, red teaming, interpretability.
- Prepare model cards, datasheets, audit trail, forensic logging, access control tools and role‑based access.
- People and processes
- Implement human‑in‑the‑loop where decisions affect the rights of data subjects.
- Train staff, introduce competency certificates and an incident playbook.
- Contracts and insurance
- Set up indemnification, warranties, limitation of liability, SLA/SLO and change‑in‑law clauses.
- Select insurance products (cyber, professional indemnity, tech E&O) and calculate premiums taking control maturity into account.
- Reporting and audit
- Prepare requirements for test documentation and reporting to regulators.
- Schedule regular peer reviews and independent technical audits, and arrange external validation and benchmarking.
- Disputes and reserves
- Assess compensation models for affected parties, methodologies for calculating financial risk, and reserves for claims.
- Plan resources for legal disputes and a communications strategy.
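Several engineering steps in the checklist (drift monitoring, external validation and benchmarking) can be illustrated with a standard drift statistic, the Population Stability Index. The rule-of-thumb thresholds below (0.1 and 0.25) are the commonly cited ones, used here purely as an illustration:

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions
    (each a list of proportions summing to 1). Larger means more drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

def drift_status(value: float) -> str:
    """Common rule of thumb: <0.1 stable, 0.1-0.25 moderate, >0.25 investigate."""
    if value < 0.1:
        return "stable"
    if value < 0.25:
        return "moderate-shift"
    return "investigate"
```

Wiring a check like this into scheduled monitoring, with its outputs written to the audit trail, turns the checklist item into evidence that the operator actively supervised the model rather than merely deployed it.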
COREDO accelerates and safeguards innovation
Our experience at COREDO has shown: businesses need a partner who combines licensing, international registration and AI compliance into a single roadmap. For companies entering the EU, the Czech Republic, Slovakia, Cyprus, Estonia, the United Kingdom, Singapore and Dubai, we build infrastructure that withstands audits and scaling.
- Registration and licensing. We support licenses for payments, forex and crypto services, taking into account best practices for implementing AI in financial services and local regulators’ expectations.
- Contractual architecture. We develop legal mechanisms for allocating risk across the AI ecosystem, including best practices contract templates, indemnities, warranties and SLA/SLO.
- Technical compliance. We implement compliance by design: audit trail, explainability, data governance, AIA/DPIA, provenance control, regulatory monitoring tools and compliance automation.
- Insurance and financial planning. We set up insurance coverage and help assess the cost of an AI error, systemic risk and ROI taking control measures into account.
- Corporate oversight. We help boards of directors build a generative AI policy, ethical standards and training programs, including the role of committees and model resilience KPIs.
- Regulatory engagement. We support projects in sandboxes, arrange registrations and notifications, and prepare reporting and communications with regulators.
Conclusions
Responsibility for AI errors is not a show-stopper but a manageable factor. When roles are clearly allocated between the AI manufacturer, the model provider, the integrator and the client, when contracts cover the key risks, and when the engineering environment provides explainability, an audit trail and resilience, you reduce the likelihood of disputes and accelerate innovation.