Liability for AI: who is liable for AI errors?


Practice at COREDO confirms: the question “who is responsible for AI errors” is no longer an academic discussion. This is a daily management task related to liability for AI, compliance, contracts and insurance, which determines the cost of capital, time-to-market and strategic resilience.

In this article I have assembled a practical framework to help owners and directors turn the legal risks of AI deployment into manageable metrics. The text reflects both the legal perspective (liability under the AI Act, GDPR, consumer law) and engineering and operational aspects (MLOps, explainability, audit trail), because legal liability for AI always rests on evidence of due diligence and real control over the technologies.

Why is the board of directors responsible for AI?


Executives are responsible not only for profit but also for corporate accountability for AI decisions. When algorithms are involved in lending, underwriting, KYC or advertising, the question “who bears the losses from AI errors” becomes a matter of brand survival. Civil liability for AI failures, reputational damage and regulatory sanctions converge here.

Our experience at COREDO has shown that even “moderate” incidents, such as erroneous AI recommendations in sales, lead to costly process reworks and the revision of contractual obligations. Add to that the jurisdictional issues in cross-border AI errors, and you’ll understand why companies with operations in Europe, Asia and the Middle East are building a unified accountability architecture for autonomous systems and their suppliers.

Regulatory framework of Europe, Asia and the CIS

In the EU, the AI Act has been adopted; it establishes a risk-oriented approach and introduces specific roles for persons responsible for high-risk systems (the EU AI Act requirements for responsible persons). AI regulation in the EU is closely linked to the GDPR and liability for automated decisions, including the right to an explanation and the administrative rights of data subjects. AI regulators in Europe rely on coordination with the EDPB and ENISA, while national agencies issue sectoral guides and create regulatory sandboxes for AI.

In Asia the regulatory landscape is fragmented, but requirements for algorithmic transparency, bias control and data security are being strengthened across the board. Countries where the COREDO team is actively working, for example, Singapore, promote soft-regulation models with strict standards for privacy by design and audits. In the CIS we see a move toward harmonization with international ISO standards and the OECD AI Principles and UNESCO recommendations on AI ethics.

Cross-border activities affect international law and cross-border liability. It is important here to consider notification regimes for risky systems, registration and the specifics of regulating deepfakes and platform liability, especially if your service distributes user-generated content and generative media.

Strict liability vs negligence: manufacturer and supplier liability

Lawyers are familiar with two main constructs: strict liability versus liability for negligence in AI. Under strict (product) liability for model defects, the questions are the existence of a defect and causation; under negligence, proof of a breach of the standard of due care. In the European approach, product liability for model defects and the legal foundations of strict product liability can affect both the AI manufacturer and the integrator if the defect arose from modification or incorrect integration.

Liability of model providers and the responsibility frameworks for platforms as service providers become more acute when open-source models are used. Licensing terms of open-source models and the legal assessment of open AI APIs and third-party integrations require careful certification of the supply chain: provenance control, model cards, datasheets for datasets, and security audits of code and model provenance.

Business rights in the case of a defective AI model include claims for compensation, replacement and remediation; vendor model quality guarantees and contractual warranties should be combined with clear limitations of liability. In consumer scenarios the risks increase: consumer rights and AI errors drive collective lawsuits and class-action risks, especially in cases of discrimination or widespread service failures.

Risks in contracts: indemnity and SLA/SLO

The solution developed at COREDO always begins with mapping risks to contractual AI risk management mechanisms. Contractual offloading of AI liability requires multi-level clauses: indemnification for IP infringements and privacy breaches, clauses prohibiting the use of data for re-training, warranties of compliance with standards and security, and a clear limitation of liability with carve-outs for intent and gross negligence.

  • Indemnities and clauses in contracts with AI vendors set out coverage for claims related to bias, security, leaks and defects. It is important to determine who is responsible for harm caused by AI to the client when the model operates as part of a complex solution.
  • Model SLAs and SLOs for business applications define target levels of accuracy, latency, availability and data quality metrics. Vendor due diligence and security SLAs include requirements for encryption, access management, logging and incident response times.
  • How to allocate responsibility between the customer and the AI vendor? Through a matrix of “who manages data/training/deployment/monitoring” and tying risks to control domains. For generative models add risk management practices when using generative AI: content filters, watermarking, a deepfake policy and human-in-the-loop for sensitive decisions.
  • Best-practice contract templates for procuring AI solutions include provisions on regulatory changes (change-in-law), obligations to maintain an audit trail, provide evidence packages and cooperate during audits.
In real negotiations the COREDO team pushed to include risk indicators and KRIs for AI projects directly in SLA appendices. This approach links legal metrics with operational ones, easing management and escalation.
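Attaching KRIs directly to an SLA appendix means the contractual targets can be checked mechanically. A minimal sketch of such a check, with metric names and thresholds that are purely illustrative assumptions rather than figures from any real contract:

```python
# Hypothetical SLO check for a model SLA appendix.
# Metric names and thresholds are illustrative assumptions.

SLO_TARGETS = {
    "accuracy": 0.92,       # minimum acceptable accuracy
    "latency_p95_ms": 300,  # maximum 95th-percentile latency
    "availability": 0.999,  # minimum monthly availability
}

def evaluate_slo(measured: dict) -> dict:
    """Return a report mapping each SLO metric to 'ok', 'breach' or 'missing'."""
    report = {}
    for metric, target in SLO_TARGETS.items():
        value = measured.get(metric)
        if value is None:
            report[metric] = "missing"
        elif metric == "latency_p95_ms":
            # Latency is a ceiling; the other metrics are floors
            report[metric] = "ok" if value <= target else "breach"
        else:
            report[metric] = "ok" if value >= target else "breach"
    return report

report = evaluate_slo(
    {"accuracy": 0.94, "latency_p95_ms": 410, "availability": 0.9995}
)
# latency exceeds the agreed ceiling, so it is reported as a breach
```

A report like this can feed escalation clauses: a single breach triggers notification, repeated breaches trigger remedies agreed in the contract.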

How to embed control into engineering

Compliance and due diligence for AI providers start with assessing the provider against AI standards and certifications (ISO/IEC 23894, ISO/IEC 27001 and national standards), as well as GDPR compliance. Regulatory requirements for model audits, algorithm audits and proof of due diligence require documentation across the whole chain: from data to deployment.

COREDO’s practice confirms: legal risk decreases when technical processes are transparent. To this end we implement:

  • Algorithmic transparency and explainability: model cards, datasheets for datasets, explainability metrics (SHAP, LIME, counterfactuals) and interpretability and model debugging tools.
  • Model version control and provenance: immutable artifact registries, role‑based access and model change audit, strict tagging policies for data and features.
  • Decision logging and audit trail for AI plus forensic logging for investigating causes of errors; this is the basis for defence in disputes and for regulatory reporting.
  • Algorithmic bias and fairness metrics, regular robustness testing and adversarial testing, as well as red teaming and stress testing of models.
  • Model drift control and performance monitoring, KRI and SLO, external validation and model benchmarking, peer review of models and independent technical audit.
  • MLOps practices for controlled risk and comparison of DevOps vs MLOps for model stability: reproducibility pipelines, data control, pre-release testing.
  • Data quality control tools and data validation, data quality control during cross-border transfer and data governance.
  • Compliance by design and documenting AI decisions, privacy by design and privacy impact assessment, as well as algorithmic impact assessment (AIA) for high-risk systems.
Such “operational legal practice” simplifies participation in regulatory sandboxes for AI and the registration of risky systems and notification of regulators, and also helps meet regulatory requirements for explaining lending decisions and for AML reporting.
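To make the fairness-metrics item above concrete, here is a minimal sketch of one widely used bias metric, demographic parity difference: the gap in approval rates between two groups. The decisions and group labels are synthetic examples, not data from any engagement.

```python
# Illustrative sketch of one bias metric: demographic parity difference.
# Decisions and group labels below are synthetic examples.

def positive_rate(decisions, groups, group):
    """Share of positive (approved) decisions within one group."""
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_diff(decisions, groups):
    """Absolute gap in approval rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(positive_rate(decisions, groups, a)
               - positive_rate(decisions, groups, b))

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = approved
groups =    ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_diff(decisions, groups)  # 0.75 - 0.25 = 0.5
```

In practice a threshold on such a metric becomes a KRI: a gap above the agreed value blocks release or triggers a review, which is exactly the kind of evidence of due diligence regulators ask for.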

Where AI errors in AML/KYC are particularly costly

In payment and credit services, the question “who is responsible for erroneous algorithmic decisions in finance” is resolved at the intersection of banking supervision, the AI Act and the GDPR. Regulatory requirements to explain lending decisions force operators to demonstrate explainability, traceability and the absence of discrimination.

Liability for AI errors in AML and KYC systems also covers false positives and false negatives. Managing such incidents requires human oversight (human-in-the-loop) and clear escalation and logging playbooks. Errors in AML automation entail regulatory liability, fines and enforcement orders if the operator cannot demonstrate due diligence and the adequacy of its algorithms.
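Managing false positives and false negatives starts with measuring them. A minimal sketch, where the alert data and the escalation thresholds are illustrative assumptions, not regulatory figures:

```python
# Sketch: measuring false-positive / false-negative rates in AML screening
# and flagging when they cross escalation thresholds.
# Data and thresholds are illustrative assumptions.

def screening_error_rates(alerts, ground_truth):
    """alerts and ground_truth are parallel lists of booleans
    (True = suspicious). Returns (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for a, t in zip(alerts, ground_truth) if a and not t)
    fn = sum(1 for a, t in zip(alerts, ground_truth) if not a and t)
    negatives = sum(1 for t in ground_truth if not t)
    positives = sum(1 for t in ground_truth if t)
    return fp / negatives, fn / positives

fpr, fnr = screening_error_rates(
    alerts=      [True, True, False, True, False, False],
    ground_truth=[True, False, False, True, True, False],
)
# Example KRI thresholds for triggering the escalation playbook
needs_escalation = fpr > 0.30 or fnr > 0.10
```

Logging these rates per model version creates exactly the audit trail needed to show a regulator that error levels were monitored and acted upon.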

For clients, the COREDO team implemented compliance controls over advertising recommendations and manipulative practices, preventing behavioral discrimination and violations of consumer protection standards. In financial products we recommend using deterministic and stochastic risk models complementarily: deterministic models for hard rules and thresholds, stochastic models for improving ranking, with mandatory explainability.
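The deterministic/stochastic split described above can be sketched as a two-layer decision function: hard rules veto first, and a probabilistic score only refines what remains. The rules, score and threshold below are hypothetical illustrations, not a real credit policy.

```python
# Sketch of the deterministic + stochastic split: hard rules first,
# a model score only for the remaining population.
# Rules, scores and the 0.7 threshold are hypothetical.

def decide(application: dict, model_score: float):
    # Deterministic layer: hard regulatory rules a score may never override
    if application["age"] < 18:
        return "reject", "hard rule: minimum age"
    if application["on_sanctions_list"]:
        return "reject", "hard rule: sanctions screening"
    # Stochastic layer: the model score refines ranking, with an
    # explicit threshold below which a human reviews the case
    if model_score >= 0.7:
        return "approve", f"model score {model_score:.2f}"
    return "manual_review", f"model score {model_score:.2f} below threshold"

decision, reason = decide(
    {"age": 34, "on_sanctions_list": False}, model_score=0.55
)
# no hard rule triggered, score below threshold -> routed to human review
```

The returned reason string doubles as the explainability record: every decision carries the rule or score that produced it.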

Insurance and preparedness for claims

Who is responsible for harm caused to a client by AI is often determined by how prepared the company is for an incident. An AI incident response playbook should include model shutdown scenarios, fallbacks to manual procedures, regulator notification, and customer communications. Forensic logging and complete decision logs reduce the cost of investigations and accelerate resolution.

AI risk insurance is another pillar. In practice we structure coverage through:

  • Insurance products: cyber for data breaches and security incidents; professional indemnity and tech E&O for professional liability, software defects, and service failures.
  • Selection criteria for AI insurance coverage: geography of risk, type of solutions (generative/classification), data volumes, presence of human-in-the-loop, incident history, regulatory requirements.
  • Pricing of insurance premiums for AI risks depends on MLOps maturity, logging quality, external audits, and available certifications.
How to prepare a company for lawsuits arising from AI? You need methods for calculating financial risk and reserves for claims, resource planning for AI-related legal disputes, and pre-established models for compensating victims and schemes for damage reimbursement. Legal precedents and liability cases related to AI are already forming, and their analysis improves the quality of your contracts and internal policies.
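A common starting point for calculating reserves for claims is a probability-weighted expected loss across incident scenarios. The scenarios, probabilities and the 1.5× buffer below are made-up planning inputs for illustration only:

```python
# Sketch of a simple expected-loss reserve for AI-related claims.
# Probabilities, severities and the buffer multiple are illustrative.

scenarios = [
    # (annual probability, estimated loss if the scenario occurs)
    (0.10, 250_000),    # discrimination claim
    (0.05, 1_000_000),  # data-breach class action
    (0.20, 50_000),     # service-failure refunds
]

expected_loss = sum(p * loss for p, loss in scenarios)
# A conservative planning choice: reserve a multiple of expected loss
reserve = 1.5 * expected_loss  # 1.5 * 85,000 = 127,500
```

Revisiting these inputs after each incident, and as case law develops, keeps the reserve aligned with the actual claims profile.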

The role of the board of directors: strategy

Responsibility of boards of directors for AI strategies includes corporate oversight: the role of the board of directors and the committees on risk, IT and compliance. Management of ethical risks and ethics‑by‑design, corporate policy on the use of generative AI and requirements for staff training and competency certificates shape the culture and “tone from the top”.

The economics of AI scaling aggravate the consequences of model defects: systemic risk from widespread use of homogeneous models can lead to simultaneous failures for many clients. Model resilience metrics when scaling, management of technical debt and the risk of accumulation during model development, as well as external validation and benchmarking become strategic KPIs.

Methodologies for assessing the ROI of AI deployment that take risks into account include the cost of an AI error (direct, indirect and reputational damage), compliance costs, insurance and reserves. In practice COREDO links ROI with KRI and control costs so that investment committees make balanced decisions.
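The risk-adjusted ROI idea can be reduced to a simple formula: benefits net of build costs, control costs, insurance and the expected cost of AI errors. All figures below are illustrative, not benchmarks.

```python
# Sketch of risk-adjusted ROI for an AI deployment.
# All figures are illustrative planning inputs.

def risk_adjusted_roi(gross_benefit: float, build_cost: float,
                      control_cost: float, insurance_premium: float,
                      expected_error_cost: float) -> float:
    """ROI with the full cost of risk included in the denominator."""
    total_cost = (build_cost + control_cost
                  + insurance_premium + expected_error_cost)
    return (gross_benefit - total_cost) / total_cost

roi = risk_adjusted_roi(
    gross_benefit=2_000_000,
    build_cost=800_000,
    control_cost=150_000,        # MLOps, audits, explainability tooling
    insurance_premium=60_000,
    expected_error_cost=90_000,  # probability-weighted incident losses
)
```

Presenting ROI this way makes the trade-off visible to an investment committee: stronger controls raise `control_cost` but lower `expected_error_cost` and, typically, the insurance premium.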

How COREDO allocates and retains risk

  • EU, licensing of a payment institution. The client implemented AI-based scoring. We built explainability based on SHAP and counterfactuals, conducted a privacy impact assessment and an algorithmic impact assessment (AIA), and prepared model cards and datasheets for datasets. We contractually established indemnification for discrimination and limited the client’s liability subject to compliance with SLA/SLO and human‑in‑the‑loop procedures. The regulator approved the model within a regulatory sandbox, and the subsequent registration and notifications to regulators about high‑risk systems were completed without comments.
  • Singapore, AML/KYC fintech provider. The system produced a high level of false positives. The COREDO team implemented incident management for false positives and false negatives, and strengthened drift monitoring and adversarial testing. We documented a vendor warranty on model quality and quick version-rollback procedures in the contracts. Result: reduced operational costs and confirmation of compliance with the national agency’s requirements.
  • Dubai, recommendation and advertising platform. The task was to control compliance of advertising recommendations and manipulations and to regulate deepfakes. Our solution included watermarking, a content policy, and clauses on the provider’s right to disable generative content in case of compliance risks. This allowed the platform to avoid consumer claims and ensure the right to an explanation during moderation.
  • United Kingdom, HR automation using open‑source models. We conducted a legal review of open‑source model license terms and third‑party integrations, implemented fairness metrics and an independent peer review. We contractually established a division of responsibility between the client and the AI vendor, including warranties and limitation of liability, as well as a due diligence checklist for AI vendors with requirements for audit trails and data governance.

Due diligence checklist: implementation steps

To minimize the legal risk of AI and speed up integration, I recommend a sequence that the COREDO team has refined across different markets:

  1. Risk classification and regulatory pathway
    • Identify the risk category under the AI Act and the relevant guidance from regulators (EDPB, ENISA, national agencies).
    • Check the need for registration/notification and participation in regulatory sandboxes.
  2. Data and IP
    • Conduct data mapping, manage third‑party rights in training data, protect IP, and assess trade‑secret disclosure risks.
    • Limit cross‑border transfers, implement privacy‑by‑design and DPIA, and control the vendor’s data usage terms.
  3. Model and engineering
    • Implement MLOps: versioning, KRI, drift monitoring, robustness tests and adversarial testing, red teaming, interpretability.
    • Prepare model cards, datasheets, audit trail, forensic logging, access control tools and role‑based access.
  4. People and processes
    • Implement human‑in‑the‑loop where decisions affect the rights of data subjects.
    • Train staff, introduce competency certificates and an incident playbook.
  5. Contracts and insurance
    • Set up indemnification, warranties, limitation of liability, SLA/SLO and change‑in‑law clauses.
    • Select insurance products (cyber, professional indemnity, tech E&O) and calculate premiums taking control maturity into account.
  6. Reporting and audit
    • Prepare requirements for test documentation and reporting to regulators.
    • Appoint regular peer reviews and independent technical audits, and arrange external validation and benchmarking.
  7. Disputes and reserves
    • Assess compensation models for affected parties, methodologies for calculating financial risk, and reserves for claims.
    • Plan resources for legal disputes and a communications strategy.

COREDO accelerates and safeguards innovation

Our experience at COREDO has shown: businesses need a partner who combines licensing, international registration and AI compliance into a single roadmap. For companies entering the EU, the Czech Republic, Slovakia, Cyprus, Estonia, the United Kingdom, Singapore and Dubai, we build infrastructure that withstands audits and scaling.

  • Registration and licensing. We support licenses for payments, forex and crypto services, taking into account best practices for implementing AI in financial services and local regulators’ expectations.
  • Contractual architecture. We develop legal mechanisms for allocating risk across the AI ecosystem, including best-practice contract templates, indemnities, warranties and SLA/SLO.
  • Technical compliance. We implement compliance by design: audit trail, explainability, data governance, AIA/DPIA, provenance control, regulatory monitoring tools and compliance automation.
  • Insurance and financial planning. We set up insurance coverage and help assess the cost of an AI error, systemic risk and ROI taking control measures into account.
  • Corporate oversight. We help boards of directors build a generative AI policy, ethical standards and training programs, including the role of committees and model resilience KPIs.
  • Regulatory engagement. We support projects in sandboxes, arrange registrations and notifications, and prepare reporting and communications with regulators.
As a result, the company receives not just ‘documents’, but a managed operating system of accountability where legal, technical and business metrics work in concert.

Conclusions

Responsibility for AI errors is not a show-stopper but a manageable factor. When you have a clear allocation of roles between the AI manufacturer, the model provider, the integrator and the client, when contracts cover key risks, and the engineering environment provides explainability, an audit trail and resilience, you reduce the likelihood of disputes and accelerate innovation.

COREDO helps build such systems in real, cross-border conditions: from the EU to Singapore and Dubai. I am convinced: companies that are putting AI due diligence in place today will benefit tomorrow in cost of capital, customer trust and speed to market for new products. If you plan to implement AI in critical processes, obtain a financial license or enter a new market, lay down an accountability architecture now. It’s an investment that protects the business and opens up room for growth.