The EU AI Act and Private International Law: A First Look
Regulation 2024/1689 laying down harmonised rules on artificial intelligence, commonly known as the EU AI Act, entered into force on 1 August 2024 and will progressively become applicable to a wide range of (private and public) organisations within transnational AI value chains connected to the EU internal market.
This Regulation is remarkable for two main reasons. First, it has a horizontal dimension, covering in principle all hazardous AI systems and models. Second, it is of a binding legal nature, going beyond classical AI ethical principles such as those developed by UNESCO, the OECD or the High-Level Expert Group on AI (HLEG).
As the AI Act is based on the New Legislative Framework (NLF) established for EU product safety policy, readers of the blog may wonder how it could have any private international law (PIL) aspect or even any impact on the field. Here are some first answers.
The AI Act in a Nutshell
Main Regulatory Blocks
The AI Act lays down three main regulatory blocks (see in detail here). First, it provides for harmonised rules concerning the placing on the market, putting into service and use of AI systems in the Union. At a higher level of granularity, it includes provisions prohibiting certain AI practices (such as social scoring or, under certain conditions, crime prediction) as well as specific requirements for high-risk systems and AI models. The second block consists of a dense public enforcement scheme covering, on the one hand, market surveillance rules to be implemented by national authorities. On the other hand, these rules are reinforced by a federal/EU-level governance framework – inspired by other regulatory instruments of the digital single market – and embodied by the AI Office. The third set of rules provides for measures to promote innovation, notably in the form of regulatory sandboxes, support measures and regulatory exemptions for SMEs and start-ups.
Conformity Regimes for AI Systems and Models
Under the AI Act, an AI system is “a machine-based system”, autonomous and adaptive, “[inferring] from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (AI Act, Article 3 point 1). Based on this broad definition, the Act provides for a taxonomy of AI systems and models, anchored in the “risk-based approach”. The more the AI system or model presents potential risks to the health, safety or fundamental rights of citizens, the more stringent the legislative requirements are.
The main regulated category in the Act is high-risk AI systems. It includes systems in the fields of biometrics, critical infrastructure, law enforcement, education and employment, essential services, migration and the administration of justice. The Regulation provides for numerous requirements relating to the safety and trustworthiness of AI (e.g. provisions on risk management, data governance, transparency, human oversight, etc.). These requirements, and the obligations that complement them, are mainly intended for AI providers (who develop and market the systems in the EU) and deployers (who use these systems in the course of their professional activity in the EU).
A lighter regulatory regime is established for general-purpose AI (GPAI) models presenting systemic risk – such as the large language model GPT-4 underlying the well-known chatbot ChatGPT. It is surprising that GPAI models with high-impact capabilities fall under a less restrictive regulatory framework than high-risk AI systems, although they are expected to have “a significant impact on the Union market […] due to actual or reasonably foreseeable negative effects on […] fundamental rights, or the society as a whole” (AI Act, Article 3 point 65). Once integrated into AI systems, they are already used, for instance, in the judicial domain, notably for the development of legal tech services for lawyers, of robot-judges or, at least, as tools to support court proceedings (as recently illustrated in a Dutch decision).
Private International Law Issues
Although the AI Act makes no reference to PIL rules or instruments, including rules governing the interplay between the Act and the EU regulations on PIL, there are some important points of contact between the two fields. They may be identified both at the implementation and at the enforcement stage of the AI Act’s regulatory framework.
PIL Issues in the AI Act Implementation
A. The Global Reach of the AI Act
First, the AI Act has a broad geographical scope of application. It replicates the broad understanding and legal treatment of transnational supply chains for products followed by EU product safety law. Since the vast majority of the AI industry is established outside the Union, the AI Act must ensure a cross-border level playing field among all AI players and the protection of EU values, including the safeguarding of fundamental rights for European citizens. Article 2(1) of the AI Act provides for the geographical delineation of the regulatory framework and consists of a rule of applicability, in the same vein as Article 2(1) of the DSA or Article 3 of the GDPR. Organisations are subject to the EU regulation even when they are established outside the Union, as soon as the AI system has an impact on individuals in the Union. More precisely, even when both the provider and the deployer of an AI system are established in a third country, the Regulation is applicable where “the output produced by the AI system is used in the Union”. The difficulty here will be the predictability, as well as the practicability, of this broad delimitation, especially for providers. The latter must anticipate not only the jurisdictions – here the EU – in which their AI systems may be deployed, but also those in which the outputs of the system’s deployment may be used.
From a PIL perspective, this provision of the AI Act is a strong expression of EU unilateralism. The Union intends to assert its regulatory leadership in the field of AI technologies internationally in order to ensure the protection of its citizens. This gives the AI Act an internationally mandatory nature, which could have further conflict-of-laws implications at the Act’s enforcement stage.
B. AI Systems in the Field of (Cross-Border) Justice and Dispute Resolution
Second, among the various AI systems covered by the AI Act, those used in the administration of justice are considered high-risk. One specific use-case deals with AI assisting a judicial authority in different tasks: researching and interpreting facts and the law, and applying the law to a concrete set of facts. This concerns the Judiciary and the “judicial function” (i.e. the juris dictio stage), beyond mere PIL issues. However, two aspects concerning PIL in particular can be highlighted.
On the one hand, the above-mentioned functions of judicial assistance particularly reflect PIL reasoning which, by its very nature, encompasses the entire dispute resolution process through an international lens (i.e. the determination of the competent jurisdiction, of the applicable law and of the content of foreign law). It is good news that AI systems that may in the future be used by courts to resolve PIL issues are qualified as high-risk. PIL is a complex field of law indeed!
On the other hand, the said use-case is extended to arbitration, which therefore includes international arbitration. The use of AI has developed considerably in the context of alternative dispute resolution. Those involved in the ecosystem of international commercial or investment arbitration therefore need to be extremely cautious. The AI Act applies to both the provider and the deployer of an AI system – the latter being, in this context, the arbitrator.
PIL Issues in the AI Act Enforcement
During the legislative process, the draft AI Act was severely criticised by civil society representatives for not establishing a private enforcement scheme (see here and here). Given the serious risks to fundamental rights posed by AI, how can affected citizens obtain compensation in case of harm? This question obviously concerns PIL rules, since the AI systems used in the EU are marketed, for the most part, by non-European operators. In addition, numerous AI systems are digital in nature and used via a computer interface. Thus, here again, the legal relationships will often be of an international nature.
The AI Act at least provides for a complaint mechanism, including for individuals, before the market surveillance authority concerned in the event of a breach of the provisions of the Regulation. Moreover, in the case of deployment of automated or decision-support systems qualified as high-risk, end-users have a right to information and a right to an explanation of the role of the AI system in the decision-making procedure. However, there is no legal basis for plaintiffs to access the courts. In addition, these international substantive rules may require the support of private international law to clarify their implementation in a particular EU Member State. In parallel to the AI Act, EU law has developed a specific framework for civil liability for defective products, recently modernised, and a proposal for a directive introducing a special liability regime for AI is under discussion in the European legislative arena. However, private international law aspects are not directly addressed by these texts.
In this highly complex and dense context, legal practitioners will have to learn to think in terms of cross-border civil justice in the AI ecosystem. The latter is not necessarily equivalent to the more general digital ecosystem, as AI is a multifaceted technology.

