EU Policymakers Nail Down Rules For AI Models
Following 22 hours of intense negotiations, European Union (EU) policymakers reached a provisional agreement on rules for the most powerful AI models. However, a substantial disagreement over the law enforcement chapter led fatigued officials to call for a recess.
The AI Act is a landmark bill aimed at regulating artificial intelligence based on its potential to cause harm. It is in the final stage of the legislative process, with the EU Commission, Council, and Parliament meeting to finalise its provisions. The negotiations, which began on December 6, ran almost continuously for a day until a recess was called on Friday morning. The first part of the talks focused on regulating powerful AI models.
The regulation’s definition of AI aligns with the OECD’s definition, with some variations. Under the agreement, free and open-source software is excluded from the regulation’s scope, except for high-risk systems, prohibited applications, or AI solutions that pose a risk of manipulation. Upcoming discussions will cover the national security exemption and whether the regulation applies to AI systems already on the market that undergo significant changes.
A compromise document suggests maintaining a tiered approach with automatic categorisation as ‘systemic’ for models trained with computing power above a certain threshold. Transparency obligations will apply to all models, and the AI Act won’t affect free and open-source models with publicly available parameters, except for specific policy compliance and reporting requirements.
An AI Office will be established within the Commission to enforce foundation model provisions. National competent authorities will supervise AI systems, and an advisory forum and scientific panel will gather stakeholder feedback and advise on regulation enforcement.
The AI Act bans applications deemed to pose an unacceptable risk, including manipulative techniques, systems exploiting vulnerabilities, social scoring, and the indiscriminate scraping of facial images. However, the Parliament and the Council disagree on the scope of the banned applications, including biometric categorisation systems, predictive policing, emotion recognition software, and the use of Remote Biometric Identification (RBI). The disagreement also extends to whether these bans apply only within the EU or also to EU-based companies selling such applications abroad.