Migliorini and Ilhão Moreira on the EU AI Act and Arbitration


This post was written by Sara Migliorini and João Ilhão Moreira, who both teach at the Faculty of Law of the University of Macau. It builds on an article they co-authored, titled Clashing Frameworks: the EU AI Act and Arbitration, just published in the European Journal of Risk Regulation.


The EU AI Act (hereinafter, the “Act” or the “AI Act”) is now in force, and arbitration is firmly in its sights. The Act classifies certain uses of AI in arbitration as ‘high risk’, triggering a demanding set of obligations for providers and deployers alike: arbitral institutions, individual arbitrators, and specialised legal tech companies.

In doing so, the Act directly affects cornerstone principles of arbitration, such as party autonomy, procedural flexibility, and confidentiality. As we argue in our recent article, this marks a sharp departure from the EU’s long-standing hands-off approach to commercial arbitration.

With the Act’s high‑risk provisions set to take effect within the next 15 months, we think it is timely to assess their impact and take steps to mitigate potential negative effects, including a targeted carve‑out for commercial arbitration.

The Traditional EU Approach to Arbitration

Over the years, EU law and arbitration have had a relationship that, with the exception of consumer arbitration, was largely one of mutual indifference. Despite available legislative avenues, the EU’s competence to regulate arbitration has been significantly underutilised. For example, although Article 81 TFEU empowers the EU to adopt harmonisation measures for the development of ADR, arbitration has been systematically excluded from measures based on Article 81. This approach appears to be substantially maintained in the latest document by the Commission regarding the review of the Brussels I bis Regulation. The CJEU has traditionally interpreted these exclusions broadly, confirming the EU’s restraint in regulating arbitration (e.g., London Steam-Ship).

On occasion, the EU legislator has relied on other legal bases to regulate arbitration, notably in the area of consumer protection (e.g., the Consumer ADR Directive or the Digital Services Act). Yet commercial arbitration, as a whole, has remained largely untouched by direct EU legislation. Consequently, before the AI Act, the relationship between EU law and commercial arbitration was mainly limited to issues of substantive law arising out of court proceedings related to arbitrations and awards. The classic concern was the risk that arbitrators might commit errors of law and the fact that, unlike in litigation, there are limited avenues to correct such errors. In such contexts, the CJEU has addressed matters of arbitrability (e.g., Mostaza Claro), public policy (e.g., Eco Swiss), and, more generally, the interaction with the judicial procedures for the annulment, recognition, and enforcement of arbitral awards (e.g., London Steam-Ship).

By contrast, EU law has abstained from regulating procedural issues of commercial arbitration. Traditionally, such aspects are regulated by party autonomy (often through institutional rules), supplemented by national arbitration laws and rules produced by the arbitral community itself.

The EU’s legislative and judicial abstention has sustained the doctrinal view that the EU’s legal order and commercial arbitration would evolve in parallel, with minimal interference between them. This arrangement was welcomed by a community that has historically self-regulated.

High-Risk Classification under the Act

Things have changed with the AI Act. Under a combined reading of Article 6, Annex III(8)(a), and Recital 53, AI systems are classified as high risk where they are intended to (1) assist arbitrators in researching or interpreting facts and the law, or (2) apply the law to a specific set of facts, unless they fall within a closed list of use cases in which the impact on fundamental rights and decision-making is not substantial.

Such use cases are (a) “narrow procedural tasks”, such as transforming unstructured data or classifying documents; (b) tasks that are adjunct to human effort, such as enhancing the language of documents; and (c) preparatory tasks, such as indexing and data processing activities.

This definition raises questions of interpretation, especially while we wait for the European Commission to issue clarifying guidelines (due in February 2026).

In our view, there are clear-cut cases. For example, AI used for ancillary tasks such as proofreading or language enhancement falls outside the high‑risk category, as these functions merely refine prior human work. By contrast, use of AI for any core decision‑making task is unequivocally high risk (and very likely in tension with existing principles of arbitration).

However, between these extremes lies a vast grey area: tasks such as legal research, drafting, and summarising or reviewing submissions can still shape how arbitrators interpret facts and the law. Given this, the Act’s broad language may cause uncertainty for potential addressees of these rules.

Personal Scope and Obligations

Where a system is high risk, two groups of actors assume compliance obligations: those who develop or commercialise the systems (“providers”) and those who use such systems under their own authority (“deployers”).

Arbitral institutions, individual arbitrators, and specialised legal tech companies will be subject to obligations in both capacities.

The demanding obligations for providers are listed in Sections 2 and 3 of Chapter III of the Act and include: risk management (Article 9), data quality and governance (Article 10), effective human oversight (Article 14), accuracy, robustness, and cybersecurity (Article 15), as well as registration of the system, demonstration of conformity on request, and accessibility compliance (Article 16).

The relatively less demanding obligations for deployers are listed in Article 26 and include, among others: keeping logs for at least six months, organising human oversight, and ensuring suitable input data.

Arbitral Institutions and Specialised Legal Tech Companies

Many tools currently marketed to support arbitration, from legal research platforms to document analysis systems, are likely to qualify as ‘high risk’, because they are indeed intended to be used to research facts and the law. Even more general legal tech tools widely used in law firms, such as Harvey or Robin AI, may qualify as high risk when used by arbitrators. In all these cases, the costs and regulatory uncertainty may prove particularly burdensome for smaller providers and institutions.

Individual Arbitrators

Arbitrators using AI tools face two scenarios. Firstly, an arbitrator who relies on an AI system specifically designed for arbitration will be treated as a “deployer” of a high‑risk system and subject to the relevant obligations.

Secondly, general-purpose tools not specifically designed for arbitration and not classified as high risk, such as ChatGPT, may still fall into the high‑risk category if used for legal reasoning or for applying the law to facts. A literal reading of Article 25 of the Act could even mean that arbitrators who use systems such as ChatGPT are requalified as providers and become subject to provider‑level obligations, which are more demanding than deployer‑level obligations.

Difficulties in Enforcement

Notwithstanding all the uncertainty surrounding its interpretation, non‑compliance with the Act exposes actors to serious risks. Fines can reach EUR 15 million or 3% of global annual turnover, whichever is higher. For individual arbitrators, small firms, or smaller institutions, such fines may be unsustainable, discouraging innovation and consolidating the market around large providers.

Arbitrators also face reputational and, possibly, civil liability risks, with unresolved questions as to how misuse of AI might affect the validity of an award.

Enforcement is complicated by the confidentiality of arbitration. Under current confidentiality practices, it may be difficult to determine whether AI was used at all, let alone whether its use complied with the Act. For example, reliance on AI for legal research may never appear in the procedural record. How enforcement might affect the duties of arbitrators, notably with respect to confidentiality, also remains to be seen.

Impact Outside of the EU

The Act’s extraterritorial scope adds further complexity. Under Article 2, the Act applies not only to EU‑based providers and deployers, but also to those outside the EU when (a) the output of an AI system is used within the EU, or (b) an affected party is located in the EU.

EU‑based companies, institutions, and arbitrators fall directly under the Act. However, depending on how “use of an output” in the EU is interpreted, it can be argued that the Act extends to non‑EU arbitral institutions and arbitrators who, although physically outside the EU, conduct arbitral proceedings that are legally seated in the EU. The expression “use of an output” may even cover the enforcement of awards before EU courts or cases involving a party located in the EU, irrespective of any other link with the EU.

For non‑EU actors in particular, this creates significant uncertainty.

Our Proposal

We believe that the best option would be for the EU Commission to completely exclude commercial arbitration from the scope of the Act. This could be done via a delegated act under Article 7(3), which would exclude commercial arbitration from Annex III(8)(a), while keeping consumer ADR within the Act’s scope.

An alternative option would be for the EU Commission to adopt guidelines under Article 6(5) of the Act, clarifying which uses of AI in arbitration are not high risk, following consultation with arbitral institutions and practitioners.

4 replies
  1. Adrian Briggs says:

    Given Prestige, and now this, why would anyone agree to commercial arbitration on the territory of a Member State?

    • Sara Migliorini says:

      Yes. Unfortunately, it seems that the European institutions are not interested in making arbitration workable within the European Union.

  2. Marion Ho-Dac says:

    Many thanks Sara and João for sharing your excellent research! I believe that a more nuanced picture is also possible. First, with regard to individual arbitrators, they will most often be deployers (rather than providers), implying a ‘lighter’ regime (incl. in terms of enforcement), and a change of ‘status’ (from deployer to provider) based on Article 25(1)(c) seems unlikely in practice (though it may be worth testing in courts for OpenAI and co). Second, the EU AI Act only impacts arbitration as a use case related to the administration of justice when there is a high risk of negative impact on fundamental rights. AI providers can rely on Article 6(3) if such a risk does not exist for their AI system (as you explain perfectly). In that case, the full high-risk AI regime – including for deployers – does not apply. Arbitrators are (still) human beings and must be aware of the growing capabilities of AI. The AI Act certainly plays a crucial role in this regard. If you use generative AI on a regular basis, you may already have noticed that it has a strong impact on our ‘human agency’. In the field of justice, some safeguards will certainly prove useful. What do you think?

  3. Sara Migliorini says:

    Thank you, Marion, for your interest and comments. We fully agree with you that there is a common-sense reading of the AI Act, and we hope it prevails.
    Still, Arts. 6(3) and 25(1)(c) can be read more strictly and in ways that fit poorly with ADR, especially commercial arbitration. Even if hard to enforce in practice, we feel the uncertainty is more than the arbitration ecosystem can bear.
    AI Act-style rules may be suitable for some parts of ADR. But for arbitration, particularly commercial arbitration, they are probably not. Legislators have traditionally deferred to the arbitration community on equally delicate issues that affect rights (such as conflicts of interest).
    That is why we suggest the Commission use the options available before the end of 2026 either to carve out arbitration (at least commercial arbitration) from the high-risk provisions, which would open the application of Art. 95 and the option of a community-led code of practice for safeguards, or to clearly list use cases excluded from the high‑risk category.
