06/05/2021
Briefing

The AI Regulation proposes a comprehensive regulatory framework for Artificial Intelligence (“AI”) in the EU. The aim is to provide the legal certainty necessary to facilitate innovation and investment in AI, while also safeguarding fundamental rights and ensuring that AI applications are used safely. The main provisions of the AI Regulation include the introduction of:

  1. Binding rules for AI systems that apply to providers, users, importers, and distributors of AI systems in the EU, irrespective of where they are based.
  2. A list of certain prohibited AI systems.
  3. Extensive compliance obligations for high-risk AI systems.
  4. Fines of up to EUR 30 million or up to 6% of global annual turnover, whichever is higher.

The Commission proposes a risk-based approach, under which the compliance requirements applicable to an AI system correspond to the level of risk it presents. The risk categories are: (i) unacceptable risk (these AI systems are prohibited); (ii) high risk; (iii) limited risk; and (iv) minimal risk.
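
To make the tiered structure concrete, the sketch below models the four risk categories and their consequences as a simple Python lookup. It is purely illustrative: the tier names and one-line consequences paraphrase the proposal and are not drawn from any official implementation.

    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers proposed by the Commission."""
        UNACCEPTABLE = "unacceptable"   # prohibited outright
        HIGH = "high"                   # extensive compliance obligations
        LIMITED = "limited"             # transparency obligations
        MINIMAL = "minimal"             # no additional obligations

    # Illustrative mapping from tier to regulatory consequence.
    CONSEQUENCE = {
        RiskTier.UNACCEPTABLE: "Prohibited outright.",
        RiskTier.HIGH: "Permitted, subject to conformity assessment and ongoing obligations.",
        RiskTier.LIMITED: "Permitted, subject to transparency requirements.",
        RiskTier.MINIMAL: "Permitted under existing law; voluntary codes of conduct encouraged.",
    }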

Scope of the AI Regulation

Application to Providers and Users

The AI Regulation proposes a broad regulatory scope, covering the full lifecycle of AI systems from development through sale and use. The AI Regulation will apply to:

  • providers that place AI systems on the market or put AI systems into service, regardless of whether those providers are established in the EU or in a third country;
  • users of AI systems in the EU; and
  • providers and users located in a third country, where the output produced by the AI system is used in the EU.

Therefore, the AI Regulation will apply to actors both inside and outside the EU as long as the AI system is placed on the market in the EU or its use affects people located in the EU.
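
Expressed as a decision rule, that territorial scope might be sketched as follows. The function and its parameters are hypothetical simplifications introduced for illustration; they are not a legal test.

    def regulation_applies(placed_on_eu_market: bool,
                           user_in_eu: bool,
                           output_used_in_eu: bool) -> bool:
        """Simplified sketch of the AI Regulation's territorial scope.

        The Regulation applies regardless of where the provider or user is
        established, so establishment in the EU is deliberately not a condition.
        """
        return placed_on_eu_market or user_in_eu or output_used_in_eu

    # A third-country provider whose system's output is used in the EU is caught.
    assert regulation_applies(False, False, True)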

Definition of AI system

The AI Regulation defines “AI systems” broadly as software that is developed with machine learning approaches, logic- and knowledge-based approaches, or statistical approaches, and that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

Prohibited AI Systems

The AI Regulation lists a number of AI systems which the Commission considers to pose an unacceptable risk because they contravene EU values and violate fundamental rights, and which are therefore explicitly prohibited. These AI systems include:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness, or that exploit the vulnerabilities of a specific group of persons (for example due to their age or disability), in order to materially distort a person’s behaviour in a manner that causes physical or psychological harm.
  • The use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons based on their social behaviour or characteristics where the social score generated leads to the detrimental or unfavourable treatment of certain groups of persons.
  • AI systems used for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, unless strictly necessary for a targeted crime search or the prevention of substantial threats. This prohibition has likely been introduced to address concerns raised by both the European Parliament and the Commission in 2020 in connection with a facial recognition app developed by Clearview AI, which allows clients such as US law enforcement authorities to match photos of unknown people to images of them found online. That technology would now fall within the category of AI posing an unacceptable risk under the AI Regulation and would therefore be prohibited.

High-Risk AI Systems

The AI Regulation contains specific requirements for so-called “high-risk” AI systems.

The definition of a high-risk AI system

The term “high-risk AI” is not defined, but Articles 6 and 7 of the AI Regulation set out the criteria used to determine whether a system should be considered high-risk.

  • Article 6 refers to AI systems intended to be used as a safety component of products (or which are themselves a product). This includes products and components covered by the existing EU product safety legislation listed in Annex II to the AI Regulation.
  • Article 7 refers to stand-alone AI systems whose use may have an impact on the fundamental rights of natural persons. These systems are listed in Annex III and include, for example, real-time and “post” remote biometric identification systems, as well as systems used in education and vocational training, employment, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. The list currently included in Annex III may be expanded in the future to cover other AI systems which the Commission considers to present similarly high risks of harm. A simplified sketch of this two-limb test follows below.
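
Read together, these two routes suggest a two-limb test, sketched below in simplified form. The function, its parameters and the area labels are illustrative paraphrases of Annexes II and III, not the statutory wording.

    # Hypothetical, simplified sketch of the two routes to high-risk status.
    ANNEX_III_AREAS = {
        "biometric identification",
        "education and vocational training",
        "employment",
        "law enforcement",
        "migration, asylum and border control",
        "administration of justice and democratic processes",
    }

    def is_high_risk(safety_component_of_annex_ii_product: bool, use_area: str) -> bool:
        """High-risk if the system is a safety component of a product covered by
        the Annex II legislation (Article 6), or a stand-alone system used in an
        area listed in Annex III (Article 7)."""
        return safety_component_of_annex_ii_product or use_area in ANNEX_III_AREAS

    assert is_high_risk(False, "employment")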

General requirements applicable to high-risk AI systems

The AI Regulation imposes the following general requirements on high-risk AI systems:

  • Transparency: High-risk AI systems must be designed and developed to ensure that the system is sufficiently transparent to enable users to interpret its output and use it appropriately;
  • Human oversight: High-risk AI systems must be designed and developed in such a way that there is human oversight of the system, aimed at minimising risks to health, safety and fundamental rights;
  • Risk management system: A risk management system must be established and maintained throughout the lifetime of the system to identify and analyse risks and adopt suitable risk management measures;
  • Training and testing: Data sets used to support training, validation and testing must be subject to appropriate data governance and management practices and must be relevant, representative, accurate and complete;
  • Technical documentation: Complete technical documentation that demonstrates compliance with the AI Regulation must be in place before the AI system is placed on the market and must be maintained throughout the lifecycle of the system; and
  • Security: A high level of accuracy, robustness and security must consistently be ensured throughout the lifecycle of the high-risk AI system.

Requirements applicable to providers of high-risk AI

The AI Regulation imposes the following specific requirements on the provider of a high-risk AI system:

  • Compliance: Ensure compliance with the requirements for high-risk AI systems (outlined above);
  • Conformity assessment: Ensure the system undergoes the relevant conformity assessment procedure (prior to placing the system on the market or putting it into service);
  • Corrective action and notification: Immediately take corrective action to address any suspected non-conformity and notify relevant authorities of such non-conformity;
  • Quality management system: Implement a quality management system, including a strategy for regulatory compliance, and procedures for design, testing, validation, data management, and recordkeeping;
  • Registration: Register the AI system in the AI database before placing a high-risk AI system on the market; and
  • Post-market monitoring: Implement and maintain a post-market monitoring system, collecting and analysing data about the performance of the high-risk AI system throughout the system’s lifetime. This includes an obligation to report any serious incident, and any malfunctioning of the AI system that constitutes a breach of obligations under EU law intended to protect fundamental rights.

Requirements applicable to users of high-risk AI

The AI Regulation imposes more limited but notable obligations on users of high-risk AI systems, including:

  • use the systems in accordance with the instructions of the provider and implement all technical and organisational measures stipulated by the provider to address the risks of using the high-risk AI system;
  • ensure that all input data is relevant to the system’s intended purpose;
  • monitor the operation of the system and notify the provider of serious incidents and malfunctioning; and
  • maintain the logs automatically generated by the high-risk AI system, where those logs are within the user’s control.

All Other AI Systems

Other AI systems which do not qualify as prohibited or high-risk AI systems are not subject to any specific requirements. In order to facilitate the development of “trustworthy AI”, the Commission has stated that providers of “non-high-risk” AI systems should be encouraged to develop codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems.

For certain AI systems which pose a limited risk, transparency requirements are imposed. For example, AI systems which are intended to interact with natural persons must be designed and developed in such a way that users are informed they are interacting with an AI system, unless this is “obvious from the circumstances and the context of use”. This transparency obligation would apply, for example, to the use of chatbots.
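
By way of illustration, a chatbot operator might discharge this obligation with an up-front disclosure along the lines of the hypothetical sketch below; the function name, flag and disclosure wording are invented for illustration.

    def start_chat_session(obviously_ai: bool = False) -> None:
        """Open a chat session, disclosing the use of AI where required.

        Disclosure may be unnecessary where the AI nature of the system is
        obvious from the circumstances and context of use; the obviously_ai
        flag stands in for that judgement.
        """
        if not obviously_ai:
            print("Please note: you are chatting with an automated AI assistant.")
        print("How can I help you today?")

    start_chat_session()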

All other “minimal risk” AI systems can be developed and used subject to existing legislation, without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Providers of those systems may nevertheless choose to apply the requirements for trustworthy AI voluntarily and to adhere to voluntary codes of conduct.

Enforcement

  • European Artificial Intelligence Board (“EAIB”): The AI Regulation provides for the establishment of the EAIB to advise and assist the Commission in connection with the AI Regulation. The EAIB will facilitate effective cooperation between the national supervisory authorities and the Commission, coordinate and contribute to guidance issued by the Commission, and assist the national supervisory authorities and the Commission in ensuring the consistent application of the Regulation.
  • National competent authorities: Member States must designate national competent authorities and a national supervisory authority responsible for providing guidance and advice on the AI Regulation.
  • Enforcement: Member State authorities are required to conduct market surveillance of AI systems. If an authority believes that an AI system presents a risk to health, safety or fundamental rights, the authority must carry out an evaluation of the AI system and where necessary, impose corrective action.
  • Sanctions: Infringement of the AI Regulation is subject to tiered financial sanctions: up to €30 million or 6% of global annual turnover for breaches of the prohibited-practice and data-governance rules; up to €20 million or 4% for non-compliance with other requirements; and up to €10 million or 2% for supplying incorrect, incomplete or misleading information to authorities, in each case whichever is higher. The calculation is sketched below.
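
Because each tier is expressed as the higher of a fixed cap and a percentage of global annual turnover, the applicable ceiling can be computed mechanically. The minimal sketch below uses the top tier’s figures from the proposal; the function itself is illustrative.

    def maximum_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
        """Return the applicable ceiling: the higher of the fixed cap and the
        stated percentage of global annual turnover."""
        return max(fixed_cap_eur, turnover_pct * turnover_eur)

    # Top tier (EUR 30m or 6%) for a company with EUR 1bn global turnover:
    print(maximum_fine(1_000_000_000, 30_000_000, 0.06))  # 60000000.0, i.e. the 6% limb applies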

The AI Regulation will be enforced by supervisory authorities and does not provide for a complaint system or direct enforcement rights for individuals. It is unclear whether Member States will appoint data protection supervisory authorities, national standards agencies or other agencies to perform the “competent authority” role. Notably, the AI Regulation does not replicate the “one-stop-shop” system under the GDPR, which may lead to concerns about consistency and cooperation across the 27 Member States.

Next Steps

The proposal now goes to the European Parliament and the Council of the European Union for further consideration and debate. Once adopted, the Regulation will come into force 20 days after its publication in the Official Journal. The Regulation will apply 24 months after that date, although some provisions may apply sooner.

Conclusion

The AI Regulation is being widely heralded as the new “GDPR for AI”, and it certainly represents a comprehensive and bold move by the Commission to lead the way in one of the most rapidly developing areas of technology since the creation of the Internet. The rapid evolution and deployment of AI in IoT devices, vehicles, mobile devices, retail, medicine and other spheres creates huge opportunities to advance the state of the art. However, it also has the potential to cause enormous harm to a broad range of human rights, from personal safety and privacy to equality and beyond. By introducing a risk-based approach for providers and users of AI systems and giving it extra-territorial effect, the EU has provided a welcome legal and ethical framework that, as with the GDPR, may well become the global standard.