This post was contributed by Alexia Pato, who is a Postdoctoral Research Fellow at McGill University (Montreal, Canada).
The present post provides an overview of the legal initiatives on artificial intelligence (AI) recently launched at the EU level and the questions they generate from a private international law (PIL) perspective.
The analysis starts with the 2021 Proposal for a Regulation on harmonised rules on AI and continues with the EU Parliament’s detailed recommendations for drawing up a Regulation on liability for the operation of AI systems.
Overview of the Proposed AI Regulation
On 21 April 2021, the EU Commission published its much-awaited Proposal for a Regulation laying down harmonised rules on AI, following explicit requests from the Council and the Parliament (see, in particular, the AI-related resolutions of the Parliament of October 2020 on ethics, civil liability and intellectual property). The proposed Regulation’s goal is to promote the free movement of AI-related goods and services, while ensuring the protection of fundamental rights.
If enacted, the Regulation would create a horizontal regulatory framework for the development, placement on the market and use of AI systems in the Union, depending on the risks that those systems generate for people’s health and safety or fundamental rights. In particular, Article 5 forbids AI practices which create an unacceptable risk (some exceptions may nevertheless apply). The prohibition extends to AI systems deploying subliminal techniques beyond a person’s consciousness to induce a particular behaviour and to those exploiting the vulnerabilities of a group of people (e.g., a doll integrated with a voice assistant that encourages children to play dangerous games in order to maximise their fun).
Real-time remote biometric identification (e.g., facial recognition) and social scoring are deemed to create an unacceptable risk as well. High-risk AI systems (Title III) must undergo an ex ante conformity assessment before they can be placed on the EU market (Article 19 and Title III, Chapter 5).
The proposed Regulation imposes a series of requirements in relation to data, documentation and record keeping, transparency and information to users, human oversight, robustness, accuracy and security (Articles 8 to 15). Examples of high-risk AI systems include medical assistants (e.g., IBM’s Watson assistant), chatbots and automated recruitment applications. Lastly, AI systems which create a low or minimal risk are permitted.
For a general assessment of the Proposal, see the CEPS Think Tank discussion with Lucilla Sioli (DG CONNECT) available here, as well as the Ars Boni podcast available here.
The Extraterritorial Reach of EU Law
As Article 2 of the proposed AI Regulation would confer on the Regulation an extraterritorial reach, PIL questions emerge. In particular, the EU rules on AI are meant to apply to (1) providers placing AI systems on the EU market or putting them into service there, irrespective of their place of establishment; (2) users located in the EU; and (3) providers and users located in a third state, when the output produced by the AI system is used – but not marketed – in the EU.
Remarkably, Article 2 bypasses the traditional choice of law methodology and unilaterally delineates the Regulation’s territorial scope of application.
This legislative technique has been used on other occasions: the most recent example is perhaps Article 3 of the General Data Protection Regulation (GDPR). Literature on the latter provision shows that the extraterritorial application of laws creates a fertile ground for overlaps and high compliance costs. The same observation could apply to AI if other states chose to exercise their (legislative) jurisdiction extraterritorially. How private (or public) international law will tackle that concern remains to be seen.
Moreover, interpretative issues are likely to arise, as the wording of Article 2 is vague. In particular, when is a user “located” in the EU – does temporary presence on EU territory trigger the application of the Regulation? What is the “output” of an AI system? And finally, when is an AI system “placed on the EU market” or “put into service” there?
The Law Applicable to Civil Liability
Although AI technologies have great potential to significantly improve our lives in many sectors, it is widely acknowledged that the misuse of AI systems may be harmful. Traffic accidents involving either autonomous – i.e. driverless – or driver-assist vehicles are a telling example in that regard.
Currently, the law applicable to civil liability in such a scenario essentially depends on the actors involved – the driver, the manufacturer of the car, the designer of the software, etc. Several PIL systems applying different connecting factors might come into play, namely the Rome II Regulation, the 1971 Hague Convention on the Law Applicable to Traffic Accidents and the 1973 Hague Convention on the Law Applicable to Products Liability. Since national civil liability regimes vary (sometimes significantly) from one state to another, the outcome of a case might differ depending on the court seized.
For a thorough PIL analysis, see T. Kadner Graziano, “Cross-Border Traffic Accidents in the EU – The Potential Impact of Driverless Cars” (Study for the JURI Committee, 2016), available here.
The EU Commission announced that a new piece of legislation addressing civil liability should soon complement the proposed AI Regulation, following the EU Parliament’s detailed recommendations for drawing up a regulation on liability for the operation of AI systems. If followed by the Commission and adopted, the text would partially harmonise national laws on civil liability in the EU. National regimes would not be replaced, however; only targeted adjustments would be made.
The object of the future Regulation is to hold operators of high-risk AI systems strictly liable, while operators of other AI systems would be subject to a fault-based liability regime. Finally, the drafting of the future Regulation should go hand in hand with the necessary review of the Product Liability Directive in order to build a consistent liability framework in the EU.
According to Article 2 of the Parliament’s Draft Proposal, the liability rules enacted at the EU level would apply “on the territory of the Union where a physical or virtual activity, device or process driven by an AI system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss”.
I find the wording of this provision unclear: shall the future Regulation apply whenever a court of a Member State is seized with a dispute involving damage caused by an AI system (as the phrase “on the territory of the Union” suggests), or must the damage, the operator, the activity or the victim additionally be located in the EU?
Additionally, even though the future Regulation would take precedence over the Rome II Regulation pursuant to Article 27 of the latter, traditional choice of law rules would still be needed to designate the law applicable to questions falling outside the future Regulation’s scope (such as the law applicable to multiple liability where non-operators are involved, to mention just one example). Fragmentation would therefore not be completely avoided.
For an analysis of the Draft Proposal from a PIL perspective, see J. von Hein, “Liability for Artificial Intelligence in Private International Law” (online presentation, 25 June 2020), available here.
Conclusion
The interaction of AI with the PIL field puts interesting research questions on the table for legal scholars. As things currently stand, however, the EU’s legislative initiatives do not overcome the sempiternal difficulties experienced in PIL, namely the fragmented application of laws and the difficulty of managing interactions between multiple legal texts because of their overlapping and extraterritorial effects.