(May 26, 2021) On April 21, 2021, the European Commission published a legislative proposal for an Artificial Intelligence Act (AI Act). The AI Act addresses the risks that AI systems pose to the safety or fundamental rights of citizens, following a risk-based approach with risk levels ranging from unacceptable to minimal. Whereas certain harmful AI-enabled practices would be completely prohibited, high-risk AI systems would be subject to mandatory requirements. Providers of certain AI systems posing a low risk would be subject to transparency rules or could voluntarily comply with the rules for high-risk AI systems. The AI Act is complemented by a proposal for a Regulation on Machinery Products, which is meant to ensure the safe integration of AI systems into the overall machinery.
A European Union (EU) regulation is directly applicable in the EU member states once it enters into force and does not need to be transposed into national law. (Consolidated Version of the Treaty on the Functioning of the European Union (TFEU) art. 288, para. 2.)
Content of the Artificial Intelligence Act
Definition of AI
The AI Act defines an artificial intelligence system (AI system) as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” (AI Act art. 3(1).) The approaches listed in the annex are machine learning approaches, logic- and knowledge-based approaches, and statistical approaches. The list can be adapted by the European Commission to cover new market and technological developments. (Art. 4.)
Scope
The AI Act would apply to providers that place AI systems on the EU market or put them into service, irrespective of where they are located; to users of AI systems located within the EU; and to providers and users of AI systems located in a third country where the output produced by the AI system is used in the EU. (Art. 2, para. 1.)
Risk-Based Regulation – Unacceptable Risk
The AI Act would use a risk-based approach with four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Certain AI systems that cause or are likely to cause a person physical or psychological harm would be completely prohibited. The following AI systems would not be allowed to be put on the market, put into service, or used:
- AI systems that deploy subliminal techniques beyond a person’s consciousness
- AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability
- AI systems used by public authorities for general-purpose social scoring of natural persons over a certain period of time, where the resulting score leads to detrimental or unfavorable treatment
- Real-time remote biometric identification systems used in publicly accessible spaces for the purpose of law enforcement, with some limited exceptions (Art. 5, para. 1.)
When an exception for the use of real-time remote biometric identification systems by law enforcement applies, the systems would have to comply with additional safeguards, such as necessity, proportionality, and prior authorization by a judicial or an independent administrative authority. (Art. 5, paras. 2–4.)
High-Risk AI Systems
AI systems intended for use as safety components of products that are subject to third-party ex-ante conformity assessment and stand-alone AI systems that have implications for fundamental rights and are explicitly listed in Annex III to the AI Act would be classified as high-risk. (Art. 6.) The risk classification would depend on the intended purpose of the AI system. The system would have to be registered by the provider in the EU Database for Stand-Alone High-Risk AI Systems before it could be placed on the market or put into service. (Arts. 51, 60.) High-risk AI systems would be subject to mandatory requirements regarding the quality of data sets used; technical documentation; record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy, and cybersecurity. (Arts. 8–15.) In addition, a conformity assessment would have to be performed before they could be placed on the EU market. (Art. 19.)
Annex III lists AI systems in the following sectors as high-risk:
- biometric identification and categorization of natural persons
- management and operation of critical infrastructure
- access to education and vocational training, and the assessment of students in these contexts
- employment, workers management, and access to self-employment (recruitment, promotion, termination)
- access to and enjoyment of essential private services and public services and benefits
- law enforcement (individual risk assessments, polygraphs or similar tools, deep fake detection, evaluation of the reliability of evidence, predictive policing, profiling, crime analytics regarding natural persons)
- migration, asylum, and border control management
- administration of justice and democratic processes
The European Commission would be authorized to expand the list if necessary, subject to a set of defined criteria. (Art. 7.)
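For readers who find the two-pronged test of Article 6 easier to follow as a decision rule, the minimal Python sketch below restates the classification logic described above. It is purely illustrative: the data structure, field names, and area labels are hypothetical simplifications, not terms taken from the proposal.

```python
# Purely illustrative sketch of the Art. 6 classification logic.
# All names and the area labels below are hypothetical simplifications.
from dataclasses import dataclass
from typing import Optional

# Simplified stand-ins for the sectors listed in Annex III
ANNEX_III_AREAS = {
    "biometric identification and categorization",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

@dataclass
class AISystem:
    is_product_safety_component: bool      # safety component of a regulated product
    third_party_assessment_required: bool  # product subject to ex-ante conformity assessment
    intended_purpose_area: Optional[str]   # sector the system is intended for, if stand-alone

def is_high_risk(system: AISystem) -> bool:
    """Return True if the system would be classified as high-risk under Art. 6."""
    # First prong: safety component of a product that is itself subject
    # to third-party ex-ante conformity assessment
    if system.is_product_safety_component and system.third_party_assessment_required:
        return True
    # Second prong: stand-alone system in a sector listed in Annex III
    return system.intended_purpose_area in ANNEX_III_AREAS

# Example: a stand-alone recruitment screening tool would be high-risk
print(is_high_risk(AISystem(False, False, "employment and workers management")))  # True
```

Note that under either prong the classification turns on the intended purpose of the system, not on the technology used to build it.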
Limited and Non-High-Risk AI Systems
Certain AI systems that are intended to interact with natural persons, are used to detect emotions or to determine association with social categories on the basis of biometric data, or generate or manipulate content (deep fakes) would have to comply with various transparency requirements so that natural persons are informed, for example, that they are interacting with a machine or that content has been artificially generated or manipulated. (Art. 52.) Providers of AI systems that are not high-risk are encouraged to adopt codes of conduct and apply the mandatory requirements for high-risk AI systems voluntarily. (Art. 69.)
Penalties for Infringement
EU member states would be obligated to provide for “effective, proportionate and dissuasive penalties” for infringements. The AI Act lays down certain thresholds for administrative fines depending on the type of infringement, with the fixed amount or the turnover-based amount applying, whichever is higher. For using prohibited AI practices or not complying with the requirements on data, fines are a maximum of 30 million euros (€) (about US$36.5 million) or 6% of the total worldwide annual turnover of the preceding financial year; for noncompliance with other requirements, the maximum fine is €20 million (about US$24.3 million) or 4% of the total worldwide annual turnover. For supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request, fines of up to €10 million (about US$12.2 million) or 2% of the total worldwide annual turnover could be imposed. (Art. 71.)
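As a rough illustration of how these caps interact, the short Python sketch below computes the upper fine limit for each tier, taking the higher of the fixed amount and the turnover-based amount. The function and tier names are hypothetical; the figures are those of Article 71 of the proposal.

```python
# Illustrative sketch: upper limits of administrative fines under Art. 71
# of the proposed AI Act. The tier labels and function name are hypothetical.

TIERS = {
    "prohibited_practice_or_data": (30_000_000, 0.06),  # prohibited practices / data requirements
    "other_requirements": (20_000_000, 0.04),
    "incorrect_information": (10_000_000, 0.02),
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the fine cap in euros: the fixed cap or the turnover-based
    cap, whichever is higher (Art. 71)."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion worldwide annual turnover that uses
# a prohibited AI practice faces a fine of up to EUR 60 million (6% > EUR 30m).
print(max_fine("prohibited_practice_or_data", 1_000_000_000))  # 60000000.0
```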
Measures in Support of Innovation
To promote innovation, the AI Act calls on national authorities and the European Data Protection Supervisor to establish AI regulatory sandboxes, meaning “a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan.” (Art. 53.) Furthermore, the AI Act would provide specific measures to reduce the regulatory burden on and support micro or small enterprises and users, as well as start-ups. (Art. 55.)