Examining Questions on AI and Liability with ITI Members and MEP Voss

With the publication of the European Commission’s White Paper on Artificial Intelligence on 19 February 2020, the policy debate around Artificial Intelligence, and how EU institutions will regulate it, has greatly intensified in Brussels. Liability remains one of the aspects of AI regulation that raises the most questions from a legal point of view. ‘How should risk categories associated with AI applications be defined?’, ‘Should liability lie with software developers or deployers?’, ‘How would potential new regulation interact with existing civil liability legislation at EU and Member State level?’, and ‘What kind of liability regime is most appropriate for AI?’ are among the questions that industry partners, academics, and regulators seek to answer.

In an effort to address these fundamental questions, on 22 April 2020 ITI and Member of the European Parliament (MEP) Axel Voss, who is in charge of drafting a report on AI liability for the European Parliament’s Legal Affairs Committee, co-hosted a discussion with our member companies. Other panelists included Isabelle Buscke, Head of the Brussels Office of the German Federal Consumer Association VZBV; Professor Dr. Peter Bräutigam, Partner at Noerr LLP; Corinna Schulze, Director for EU Government Relations and Global Corporate Affairs at SAP; and Jean-Marc Leclerc, Head of EU Affairs at IBM.

A central issue in the conversation was how to define categories of risk. Participants generally agreed on the need for a risk-based approach to AI liability, but also pointed out that the model of high- versus low-risk AI applications introduced by the European Commission’s White Paper needs additional clarification. While a sector-based approach may fail to take into account the diversity of AI applications and their context-specific risk factors, a prescriptive solution, such as a list of applications considered high-risk, may need constant revision and ultimately provide little legal certainty. Participants also discussed whether a high-risk application could be defined as one posing risks to society at large, including third parties that have no contractual link to the developers or deployers of the AI. Autonomous driving and drones may fall into this category.

Another key topic was the question of who should be considered liable for damage caused by an AI system. Software developers, manufacturers, deployers, and operators are all participants in the AI supply chain and, depending on the context, might all share a degree of liability. It is difficult to reflect this variety in a liability regime. While this situation risks creating uncertainty for victims, the attribution of fault also needs to take into account possible misuses of the AI in breach of private agreements in business-to-business contexts. Striking a balance among these diverse interests will be key to defining the most appropriate liability regime. The discussion subsequently turned to the type of liability regime suited to high-risk and low-risk AI applications. A strict liability regime (independent of fault or negligence) may be needed to address the difficulty of attributing fault in the case of AI. Some participants noted that this approach may prove suitable for clearly defined high-risk AI applications. A system that reverses the burden of proof may be an alternative, on top of the general default liability rules.

Finally, panelists reflected on the relationship between existing civil liability legislation and AI. While some argued that an EU-level framework should harmonise existing legislation at Member State level to serve the Single Market, others maintained that civil law at Member State level should remain the main reference.

Overall, this fruitful debate laid out insightful perspectives on the key legal issues related to AI liability. Given the complexity of the topic and the variety of interests involved, it remains critically important that policymakers, industry, and civil society work together to ensure that a framework for AI liability balances protection and innovation in the field.

To learn more about the tech industry’s role in addressing the economic and social implications of AI technology, in a manner that supports innovation while safeguarding the public and individual interests at stake, see ITI’s recommendations on the EU’s Strategy on Artificial Intelligence, released in February 2020.

Public Policy Tags: Artificial Intelligence