30/03/2021
Briefing

While data protection and privacy have dominated the EU regulatory agenda in recent years, artificial intelligence (“AI”) is a more recent focus of the EU’s legislative programme in the technology space. Although the GDPR, being technology-neutral in its application, will apply to AI applications, products and services, there are currently no EU laws that specifically regulate AI.

There were a number of EU publications on the regulation of AI in 2020, arising from the European Commission’s (the “Commission”) intention to accelerate digital development in the EU through its adoption of a pan-European Digital Strategy. In February 2020, the Commission published a report on the safety and liability implications of AI (the “Report”) and a white paper on AI (the “White Paper”) in order to address the gap that exists in relation to specific AI regulation in the EU. Following this, in October 2020, the European Parliament (the “Parliament”) adopted the texts of legislative proposals to introduce a legal and ethical framework for AI and a civil liability framework in respect of AI products. The Commission’s long-awaited proposal for the regulation of AI is now set to be published in Q2 of 2021.

The Commission’s Report and White Paper

The Report: Liability Issues

The Commission’s Report identified and examined the broader implications for, and potential gaps in, the EU liability and safety frameworks for AI. AI is already subject to existing EU legislation such as data protection, non-discrimination, consumer protection, and product safety and liability rules. However, certain features of AI, including its perceived lack of transparency and partially autonomous behaviour, are not easily captured by existing legislation. The Report therefore recognises that victims of damage caused by AI may find it difficult to obtain compensation under the existing liability framework.

The Commission noted that, in formulating future AI regulation, victims of damage attributable to AI products and services must enjoy the same level of protection as that afforded under existing law to victims of damage attributable to non-AI products and services.

The Report concluded that adjustments to the Product Liability Directive and national liability regimes may be appropriate to ensure that EU and national liability laws can deal with AI in the future.

The White Paper: Regulating “High Risk” AI

The White Paper contains proposed measures and regulations to promote and support the development and use of AI. It addresses the types of legal changes the Commission may recommend, including:

  • Updates to existing consumer protection laws to ensure that they continue to apply to AI products and services; and
  • New laws to regulate “high-risk” AI. Any AI not classed as “high-risk” would be subject to existing laws.

While the White Paper lacks detail on what “high-risk” AI could mean, the Commission considers that AI systems which can affect the rights of an individual or company legally or in a similarly significant way, or which pose a risk of injury, death or damage, are the types of applications that would be considered high-risk. Three examples are given, namely the use of AI:

  • in recruitment processes;
  • for biometric identification; and
  • for surveillance.

Mandatory legal requirements would apply to “high-risk” applications, such as:

  • Data and record-keeping: The requirement to keep records of data used to train and test AI systems.
  • Training data: Any data used to train AI systems would have to respect the EU’s rules and values.
  • Human oversight: The required level of oversight could include validation by a human and/or monitoring, depending on the intended use and effects of the AI system.

Consultation on Report and White Paper: Concerns Identified

A consultation was launched on foot of the Report and White Paper, enabling European citizens, Member States and relevant stakeholders to provide their opinions. The main concerns expressed by respondents included:

  • the possibility that AI may breach fundamental rights of EU citizens;
  • the possibility that the use of AI may lead to discriminatory outcomes; and
  • the possible lack of compensation following harm caused by AI.

In general, respondents strongly agreed with the White Paper’s proposals to regulate AI. The majority of respondents agreed with the Commission’s proposals regarding:

  • the introduction of clear liability and safety rules;
  • the introduction of information requirements on the nature and purpose of AI systems;
  • the fact that the regulation of AI must include human oversight; and
  • the requirement for the maintenance of records and data in AI regulation.

European Parliament’s Legislative Proposals in Respect of AI

On 20 October 2020, the Parliament adopted the texts of legislative proposals introducing a legal and ethical framework for AI and a civil liability framework in respect of the development, deployment and use of AI, robotics and related technologies (the “Regulation”). The purpose of the Regulation is to establish a comprehensive and future-proof EU regulatory framework of ethical principles and legal obligations for the development, deployment and use of AI in Europe.

The scope of the Regulation

The Regulation would apply to AI “developed, deployed or used” in Europe, regardless of whether the software, algorithms or data used or produced by such technologies are located outside the EU or have no specific geographical location. It would cover not only the developers of AI, but also the deployers and users of AI products and applications.

The Regulation would apply to “artificial intelligence”, “robotics” and “related technologies”, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the EU.

The Regulation proposes that any processing of personal data in the development, deployment and use of AI must be carried out in accordance with the GDPR and the Privacy and Electronic Communications Directive.

Liability Framework in Respect of AI Products

The Regulation sets out rules for civil liability claims by natural and legal persons against operators of AI systems. The scope of liability under the Regulation is broad, applying where “a physical or virtual activity, device or process driven by an AI system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss”.

The Regulation would introduce:

  • a strict liability regime for “high-risk AI systems”, defined as AI systems with significant potential to cause harm or damage in a manner that is random and goes beyond what can reasonably be expected; and
  • a fault-based liability regime for AI systems that do not constitute “high-risk AI systems”.

It is proposed that an operator of a non-high-risk AI system would not be liable if he or she can prove that the harm or damage was caused without his or her fault, either because the AI system was activated without his or her knowledge or because due diligence was observed.

What Lies Ahead in 2021?

The Commission must now respond to the Parliament’s legislative proposals. Respondents’ consultative feedback on the Commission’s Report and White Paper will form part of a broader stakeholder consultation process that will contribute to the Commission’s preparation of regulatory options. Following detailed analysis of respondents’ feedback and the carrying-out of an impact assessment, the Commission is expected to present an AI regulatory proposal in Q2 of 2021.

Once the Commission’s proposal is published, the other EU institutions will review it and propose changes. The European Parliament has already indicated that it will be looking for a broader scope, potentially legislating beyond “high-risk” AI applications only, whereas the Member States appear to be more aligned with the Commission’s approach in the White Paper.