The European Union has concluded negotiations on the AI Act, the first-ever law dedicated to artificial intelligence. This legislative framework aims to ensure the safety and ethical use of AI systems while upholding the EU's fundamental rights and values. The agreement, reached after extensive deliberations between the Council and the European Parliament, positions the EU as a global leader in AI regulation and sets a precedent for other jurisdictions.
Why the push for an AI Act?
AI, though not new, has undergone transformative advances fuelled by increased computing power, abundant data, and cutting-edge software. Its integration into everyday life is evident in applications such as virtual assistants, medical diagnosis, automated translation, navigation tools, manufacturing quality control, and natural disaster prediction.
AI's multifaceted impact extends to fostering innovation, efficiency, sustainability, and competitiveness in the economy. At the same time, it plays a crucial role in enhancing safety, education, and healthcare, and in the global fight against climate change. Acknowledging this dual nature, the EU champions AI development while emphasizing ethical, human-centric practices to mitigate potential risks.
Key points of the EU’s AI Act
The AI Act classifies AI systems into four levels of risk, each with corresponding rules and obligations: the higher the risk an AI application poses, the more stringent the regulation.
- Minimal or no risk: The majority of AI systems pose minimal or no risk and will not face additional regulation, so they can continue to be used as before.
- Limited risk: AI systems with limited risk will be subject to light transparency obligations, such as disclosing that their content is AI-generated.
- High risk: High-risk AI systems will be permitted on the EU market only if they meet specific requirements and obligations.
- Unacceptable risk: Certain AI applications, including cognitive behavioural manipulation, predictive policing, emotion recognition in workplaces and educational institutions, and social scoring, are deemed unacceptable and will be banned in the EU. Facial recognition for remote biometric identification will also be prohibited, with limited exceptions.
The AI Act not only emphasizes governance and the enforcement of fundamental rights but also aims to promote investment and innovation in AI within the EU. Provisions are included to support AI innovation, in line with other initiatives such as the EU's coordinated plan on artificial intelligence.
Timeline of AI Act development
- October 2020: The European Council discusses the digital transition and calls for increased investment in AI research.
- April 2021: The European Commission proposes the AI Act, aiming to harmonize rules and improve trust in AI.
- December 6, 2022: The Council adopts its position on new rules for AI, emphasizing the importance of safe and lawful AI.
- December 9, 2023: Council and Parliament negotiators reach a provisional agreement on the AI Act after marathon talks.
Global impact
The EU's AI Act is poised to set a global standard for AI regulation, much as the General Data Protection Regulation (GDPR) did for data privacy. By addressing the ethical and safety concerns associated with AI, the EU aims to lead in shaping a responsible and trustworthy global AI landscape. This historic agreement marks a significant step towards ensuring that AI technologies respect human values and fundamental rights and contribute positively to society.