Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights

The Council has adopted its common position (‘general approach’) on the Artificial Intelligence Act. Its aim is to ensure that artificial intelligence (AI) systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values.

“Artificial Intelligence is of paramount importance for our future. Today, we managed to achieve a delicate balance which will boost innovation and uptake of artificial intelligence technology across Europe. With all the benefits it presents, on the one hand, and full respect of the fundamental rights of our citizens, on the other.”

Ivan Bartoš, Czech Deputy Prime Minister for digitalisation and minister of regional development

The draft regulation presented by the Commission in April 2021 is a key element of the EU’s policy to foster the development and uptake across the single market of safe and lawful AI that respects fundamental rights.

The proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. It promotes investment and innovation in AI, enhances governance and effective enforcement of existing law on fundamental rights and safety, and facilitates the development of a single market for AI applications. It goes hand in hand with other initiatives, including the Coordinated Plan on Artificial Intelligence, which aims to accelerate investment in AI in Europe.

Definition of an AI system

To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems, the Council’s text narrows down the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches.

Prohibited AI practices

Concerning prohibited AI practices, the text extends to private actors the prohibition on using AI for social scoring. Furthermore, the provision prohibiting the use of AI systems that exploit the vulnerabilities of a specific group of persons now also covers persons who are vulnerable due to their social or economic situation.

As regards the prohibition of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, the text clarifies the objectives where such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore be exceptionally allowed to use such systems.

Classification of AI systems as high-risk

Regarding the classification of AI systems as high-risk, the text adds a horizontal layer on top of the high-risk classification, to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured.

Requirements for high-risk AI systems

Many of the requirements for high-risk AI systems have been clarified and adjusted in such a way that they are more technically feasible and less burdensome for stakeholders to comply with, for example as regards the quality of data, or in relation to the technical documentation that should be drawn up by SMEs to demonstrate that their high-risk AI systems comply with the requirements.

Since AI systems are developed and distributed through complex value chains, the text includes changes clarifying the allocation of responsibilities and roles of the various actors in those chains, in particular providers and users of AI systems. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as the relevant Union data protection or sectorial legislation, including as regards the financial services sector.

General purpose AI systems

New provisions have been added to account for situations where AI systems can be used for many different purposes (general purpose AI), and where general purpose AI technology is subsequently integrated into another high-risk system.

The text specifies that certain requirements for high-risk AI systems would also apply to general purpose AI systems in such cases. However, instead of direct application of these requirements, an implementing act would specify how they should be applied in relation to general purpose AI systems, based on a consultation and detailed impact assessment and taking into account the specific characteristics of these systems and the related value chain, technical feasibility, and market and technological developments.

Scope and provisions relating to law enforcement authorities

An explicit reference has been made to the exclusion of national security, defence, and military purposes from the scope of the AI Act. Similarly, it has been clarified that the AI Act should not apply to AI systems and their outputs used for the sole purpose of research and development, nor to the obligations of people using AI for non-professional purposes, which would fall outside the scope of the AI Act except for the transparency obligations.

Considering the specificities of law enforcement authorities, several changes have been made to provisions relating to the use of AI systems for law enforcement purposes. Notably, these changes are meant to reflect, subject to appropriate safeguards, the need to respect the confidentiality of sensitive operational data in relation to their activities.

Compliance framework and AI Board

To simplify the compliance framework for the AI Act, the text contains several clarifications and simplifications to the provisions on the conformity assessment procedures.

The provisions related to market surveillance have also been clarified and simplified to make them more effective and easier to implement. The text also substantially modifies the provisions concerning the AI Board, aiming to ensure that it has greater autonomy and to strengthen its role in the governance architecture for the AI Act. In order to ensure the involvement of stakeholders in all issues related to the implementation of the AI Act, including the preparation of implementing and delegated acts, a new requirement has been added for the Board to create a permanent subgroup serving as a platform for a wide range of stakeholders.

As regards penalties for infringements of the provisions of the AI Act, the text provides for more proportionate caps on administrative fines for SMEs and start-ups.

Transparency and other provisions in favour of the affected persons

The text includes several changes that increase transparency regarding the use of high-risk AI systems. Notably, some provisions have been updated to indicate that certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems.

Moreover, a newly added provision places an obligation on users of an emotion recognition system to inform natural persons when they are being exposed to such a system.

The text also makes it clear that a natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the AI Act and may expect that such a complaint will be handled in line with the dedicated procedures of that authority.

Measures in support of innovation

With a view to creating a legal framework that is more innovation-friendly and to promoting evidence-based regulatory learning, the provisions concerning measures in support of innovation have been substantially modified in the text.

Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems, should also allow for testing of innovative AI systems in real world conditions.

Furthermore, new provisions have been added allowing unsupervised real-world testing of AI systems, under specific conditions and safeguards. In order to alleviate the administrative burden for smaller companies, the text includes a list of actions to be undertaken to support such operators, and it provides for some limited and clearly specified derogations.

Next steps

The adoption of the general approach will allow the Council to enter negotiations with the European Parliament (‘trilogues’) once the latter adopts its own position with a view to reaching an agreement on the proposed regulation.