On 9 October 2020 Marco Bona represented PEOPIL (the Pan-European Organisation of Personal Injury Lawyers) as a speaker at the workshop “AI and liability” at the Second European AI Alliance Assembly, a stakeholder event organised by the European Commission and central to the Commission’s policymaking process in the field of AI. The first edition of the event, held in June 2019 with the participation of over 500 stakeholders, experts and policymakers, laid the groundwork for important policy and legislative initiatives currently pursued by the Commission. This year’s edition hosted an online multi-participatory forum to discuss:
-) the results of the consultation on the AI White Paper, launched by the European Commission from 19 February 2020 to 14 June 2020, and the next policy and legislative steps;
-) the finalised deliverables of the High-Level Expert Group on AI (AI HLEG);
-) the future of the European AI Alliance as a multi-stakeholder forum bringing the wider societal, economic and technical aspects of AI into the European policymaking process.

The workshop on “AI and liability” attended by Marco Bona featured the following prominent speakers: Hans INGELS (Head of Unit, Free Movement of Goods, European Commission), Corinna SCHULZE (Director, EU Government Affairs, SAP), Bernhard KOCH (Professor, Department of Civil Law, University of Innsbruck), Jean-Sébastien BORGHETTI (Professor of Private Law, Université Paris II Panthéon-Assas, France) and Dirk STAUDENMAYER (Head of Unit, Contract Law, European Commission). The session addressed the following issues:
-) possible shortcomings of the Product Liability Directive (Directive 85/374/EEC) and of national liability rules with respect to AI;
-) possible changes to the Product Liability Directive with respect to AI;
-) possible elements of the current national liability frameworks to be adapted to the challenges of AI.

Marco Bona, on the basis of the PEOPIL «Response to the EU consultation on Artificial Intelligence. Liability and insurance for personal injury and death damages caused by AI artefacts/systems» (September 2020, https://www.peopil.com/document/3692), outlined some shortcomings of the Product Liability Directive in relation to AI scenarios. In line with the PEOPIL document, he expressed the view that there is no need to reinvent this directive, which has produced positive outcomes, but that there is room for amendment. In particular, the PEOPIL paper recommends the following points for possible review:
-) the requirement that the victim prove a “defect in the product” (it should be made clear that, for the purpose of reversing the burden of proof onto the defendant, it is sufficient for the injured party to allege that the product was “unsafe” and to prove that it caused the harm);
-) the absolute ten-year limitation period provided by Article 11 of Directive 85/374/EEC, given that an AI artefact/system may manifest risks to the safety of persons only after several years of “autonomous life”;
-) the concept of “putting into circulation” itself, which does not take into account that AI products may change and be altered in the course of the “autonomous life” designed into them by the producer.

Among the problematic points of the Product Liability Directive in relation to AI artefacts/systems, Marco Bona also included the scope of the directive itself (limited to the liability of producers and to the protection of consumers rather than “victims”) and the notion of “product”, which is critical with regard to the Internet of Things and software. He also referred to the recent comments on the limits of the Product Liability Directive in Andrea Bertolini’s study «Artificial Intelligence & Liability», published in July 2020 and commissioned by the European Parliament Committee on Legal Affairs.
In line with the PEOPIL document on AI, Marco Bona outlined the need to create a new liability regime, based on strict liability and coupled with a system of mandatory insurance (including a direct right of action against the insurer), specifically addressing the liability of owners, operators and users for accidental harm arising from the operation and/or use of AI artefacts/systems. On the limits of the Product Liability Directive and the prospect of a new liability regime there was general consensus among the speakers at the workshop.

Moreover, Marco Bona expressed a positive opinion on the recent recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), approved on 5 October 2020 by the Committee on Legal Affairs of the European Parliament (the plenary debate is scheduled for 19 October). Article 4, para. 1, of the proposal provides that «The deployer of a high-risk AI-system shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system». At the same time, however, Marco Bona outlined some critical issues with this proposal, including, first of all, the distinction between “high-risk” and “low-risk” AI systems (the latter excluded from the operational scope of the proposed regulation), as well as the failure to include, in Article 6 («Extent of compensation») of the proposal, compensation for non-pecuniary (or “immaterial”) damages among the losses to be compensated in personal injury and death cases. The vast majority of the participants in the Assembly took the view that a future regulation should not be limited to “high-risk” AI systems.