
Unpacking the latest news regarding the leading European AI regulation

On Friday, December 8th, the European Parliament, Council, and Commission reached a provisional agreement on legislation that will bring sweeping changes to AI across the continent. Just as 2018’s landmark GDPR legislation addressed the safeguarding of personal data, the EU’s new AI Act recognizes the inherent risks of rapidly accelerating artificial intelligence.

Key AI Act Takeaways

The European Union’s goal is for AI in Europe to be safer and more secure, respecting fundamental rights and democratic values while letting businesses thrive and expand. The regulation balances risk mitigation with innovation, allowing AI to grow in line with humanity’s best interests. Key takeaways include:

  • Safeguards on general-purpose AI (GPAI) models and systems
  • Limitations on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations
  • Fines ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of global turnover, depending on the infringement and the size of the company

Banned AI Applications

Certain applications of artificial intelligence will be outright banned six months after the legislation enters into force. These include:

  • Biometric categorization systems that target sensitive characteristics, e.g., political or religious beliefs, gender, and race
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • Emotion recognition in the workplace and educational institutions
  • Social scoring based on behavior or personal traits
  • AI that manipulates or exploits citizens’ vulnerabilities, e.g., age, disability, or social or economic situation

“The EU is the first in the world to set robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union, and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy.” – Dragos Tudorache, Romanian MEP

An AI Act Promise to SMEs

The European Parliament wants to ensure that businesses, particularly small and medium-sized enterprises, can develop AI solutions without industry behemoths controlling the value chain. Accordingly, the agreement promotes so-called regulatory sandboxes and real-world testing, designed to help develop and train new AI before it reaches the market.

Cranium: Ensuring AI Compliance

Now more than ever, it’s critical that your AI solutions are compliant. Under the new AI Act, any general-purpose AI (GPAI) model must adhere to transparency requirements, including technical documentation and compliance with EU copyright law. GPAI models posing systemic risk must conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents to the Commission, ensure cybersecurity, and report on energy efficiency.

Thankfully, AI compliance is one of our cornerstones at Cranium – we’ll always keep it top of mind when enabling your next AI solution.

