On March 13, 2024, the European Parliament adopted the AI Act, a landmark piece of legislation regulating Artificial Intelligence (AI) systems across various sectors. This article offers a concise overview of the key aspects of the AI Act, focusing on its risk-based classification of AI systems and on prohibited AI systems.
The AI Act takes a risk-based approach to regulating AI systems, dividing them into four main categories according to the level of risk they pose (a schematic summary follows the list):
- Unacceptable Risk: Prohibited AI systems, for example systems that employ manipulative or deceptive techniques, exploit vulnerabilities, or infer sensitive attributes.
- High-Risk: AI systems subject to stringent requirements, such as those used in critical infrastructure and law enforcement. Providers of high-risk AI systems must comply with comprehensive obligations, including establishing a risk management system, ensuring data governance, and conducting model evaluations.
- Limited Risk: AI systems with lighter transparency obligations, such as chatbots and deepfakes.
- Minimal Risk: AI systems that the AI Act leaves unregulated, although voluntary codes of conduct may be adopted. These include typical uses of AI in video games and advanced spam filters.
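For operators keeping a technical inventory of their AI systems, the four tiers can be mirrored in a simple data model. The sketch below is purely illustrative: the tier names follow the Act, but the one-line obligation summaries, the inventory entries, and all identifiers are our own hypothetical simplification, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers of the EU AI Act (illustrative only)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent obligations: risk management, data governance, evaluations"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "unregulated; voluntary codes of conduct"

# Hypothetical inventory entries: system name -> provisional tier.
# A real classification requires a legal audit, not a lookup table.
inventory = {
    "customer-support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
    "grid-load-controller": RiskTier.HIGH,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```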
Prohibited AI Systems
In this blog post we focus on prohibited AI systems, i.e. those posing an unacceptable risk. The AI Act gives every market participant a six-month deadline to review their AI systems and determine whether any of them fall into the unacceptable-risk category.
These systems include the following:
- AI systems using subliminal, manipulative, or deceptive techniques to distort behavior and impair decision-making.
- AI systems exploiting vulnerabilities related to age, disability, or socio-economic situations, causing harm.
- Biometric systems inferring sensitive attributes, except for lawful law enforcement purposes.
- AI systems evaluating or classifying individuals based on their social behavior or personal characteristics (social scoring), leading to unjust or unfavorable treatment.
- ‘Real-time’ remote biometric identification in publicly accessible spaces for law enforcement, except when strictly necessary for specific objectives, such as searching for missing persons or preventing an imminent terrorist threat.
- AI systems assessing criminal risk based solely on profiling or personality traits.
- AI systems creating facial recognition databases through untargeted scraping.
- AI systems inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
Please keep in mind that establishing the risk level of an AI system is a complex audit process. The above list is not exhaustive and has been somewhat simplified for the sake of clarity.
Do not miss this deadline
We cannot emphasize enough the importance of performing a preliminary audit of your AI system within the deadline set out by the AI Act. Missing it could have grave consequences.
Operators of prohibited AI systems may face an administrative fine of up to EUR 35 million or up to 7% of their total worldwide annual turnover for the preceding financial year, whichever is higher.
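The penalty ceiling is simply the greater of those two amounts. Here is a minimal sketch of that arithmetic; the function name and the example turnover figure are ours, for illustration only:

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Ceiling of the administrative fine for prohibited AI systems:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
    (Function name and signature are hypothetical, not from the Act.)"""
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

# Example: at EUR 1 billion turnover, 7% (EUR 70 million) exceeds the
# EUR 35 million floor, so the ceiling is EUR 70 million.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```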
For this reason, we urge every AI system operator to act as soon as they can and have their AI systems audited within the six-month deadline.
Conclusion
The AI Act represents a significant step towards regulating AI technologies in the European Union, aiming to ensure their responsible and ethical deployment. By classifying AI systems according to risk levels and imposing obligations on providers, the Act fosters trust, transparency, and accountability.
At the same time, the AI Act is strict and in places difficult to interpret and comply with. If you require assistance in auditing your AI system to determine whether it is prohibited, or if you seek help in changing its risk classification, do not hesitate to contact Neuron Solutions and Kinstellar.