The EU AI Legislation Sets the Bar for Safety and Compliance

Baker Nanduru
Published in Product Coalition
4 min read · Jun 17, 2023

The European Parliament passed AI legislation this week. Before the end of the year, the Act will be ratified by most EU nations and enacted. This is a significant milestone in finalizing the world’s first comprehensive law on artificial intelligence.

For budding AI creators, this is a crucial moment, akin to a high school student familiarizing themselves with the exam format of a prestigious college entrance test. Just as the student’s performance determines their college prospects, compliance with this new law carries significant consequences: passing ensures access to desired opportunities, cheating incurs severe penalties, and failure necessitates a retake.

The Test

This new law applies to anyone who places an AI system on the EU market.

The law’s priority is to ensure AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly. People, rather than automation, should oversee AI systems to prevent harmful outcomes. The legislation is built on a comprehensive definition of AI and associated risk categories.

Each AI system is classified into a risk category — unacceptable (prohibited), high, limited, or minimal risk, with separate rules for general-purpose AI systems. Higher-risk systems face stricter requirements, and the highest risk level leads to an outright ban. Lower-risk systems mainly carry transparency obligations to ensure users know they are interacting with an AI system, not a human being.

Any EU citizen can file a complaint against an AI system provider, and each EU member state will designate an authority to review complaints. For severe compliance breaches, AI creators can be fined up to 7% of worldwide annual turnover or €40 million, whichever is higher.

Requirements

Legal experts and startups will create compliance scorecards for AI creators in the next few months. Stanford researchers have already evaluated foundation model providers, such as OpenAI (the maker of ChatGPT), for compliance with the EU AI Act, classifying compliance under four categories.

  1. Data: Disclose data sources, data used, associated data governance measures, and copyrighted data used to train the model.
  2. Model: Provide details of capabilities and limitations, foreseeable risks and their mitigations, industry benchmarks, and internal or external testing results.
  3. Compute: Disclose the compute power used to create the model and the steps taken to reduce energy consumption.
  4. Deployment: Disclose the model’s availability in the EU market, disclose machine-generated content to users, and provide documentation for downstream compliance.

Most foundation model providers, including OpenAI, stability.ai, Google, and Meta, failed to comply with the new EU AI Act. The top two reasons for non-compliance are copyright issues (AI creators do not disclose the copyright status of training data) and the lack of risk disclosure and mitigation plans. Compliance now requires disclosing all known risks and mitigation plans, and providing evidence when risks cannot be mitigated.
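The four disclosure categories above lend themselves to a simple scorecard. A minimal sketch in Python — the category names follow the Stanford evaluation, but the specific requirement lists and the scoring scheme here are illustrative assumptions, not the Act’s text:

```python
# Hypothetical compliance scorecard for the four disclosure categories.
# The requirement lists below paraphrase the article; real scorecards
# from legal experts will be far more detailed.

REQUIREMENTS = {
    "data": ["data sources", "data governance", "copyrighted data"],
    "model": ["capabilities and limitations", "foreseeable risks",
              "mitigations", "testing results"],
    "compute": ["compute power used", "energy-reduction steps"],
    "deployment": ["EU market availability",
                   "machine-generated content disclosure",
                   "downstream documentation"],
}

def score(disclosures: dict) -> dict:
    """Return the fraction of required disclosures met per category."""
    return {
        category: len(disclosures.get(category, set()) & set(required)) / len(required)
        for category, required in REQUIREMENTS.items()
    }

# Example: a provider that only discloses data sources and compute details.
provider = {
    "data": {"data sources"},
    "compute": {"compute power used", "energy-reduction steps"},
}
print(score(provider))
```

A real evaluation would weight requirements by severity and track evidence for each disclosure; this sketch only shows the category structure.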

Violations

Non-compliance by an AI system provider will result in fines. Here are the penalties by risk category:

  • Prohibited AI systems: up to €40 million or up to 7 percent of worldwide annual turnover
  • High-risk AI systems: up to €20 million or up to 4 percent of worldwide annual turnover
  • Other violations, such as providing incorrect, incomplete, or misleading information to authorities: up to €10 million or up to 2 percent of worldwide annual turnover

The fines are smaller for SMBs and startup AI creators.
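The penalty schedule above pairs a fixed cap with a turnover percentage. A small sketch of the arithmetic, assuming the higher of the two applies (per the "whichever is higher" wording earlier in the article) — the tier names are this sketch’s own, and none of this is legal advice:

```python
# Illustrative fine calculation for the three penalty tiers listed above.
# Assumption: the higher of the fixed amount and the turnover percentage
# applies, following the "whichever is higher" rule cited earlier.

FINE_TIERS = {
    "prohibited": (40_000_000, 0.07),       # €40M or 7% of turnover
    "high_risk": (20_000_000, 0.04),        # €20M or 4%
    "misleading_info": (10_000_000, 0.02),  # €10M or 2%
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine in euros for a violation tier."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)

# A company with €2 billion turnover: 7% is €140M, exceeding the €40M floor.
print(max_fine("prohibited", 2_000_000_000))  # 140000000.0
```

Note how the percentage dominates for large companies, while the fixed amount sets a floor for smaller ones — which is why reduced fines for SMBs and startups matter.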

Implications

AI creators now have a regulatory compliance north star. More compliance scorecards and tools will become available in the next six months, making compliance easier going forward. Those aiming to commercialize in the EU must have mature, compliant AI systems. While the EU’s rollout will be gradual, early compliance offers an advantage in capturing EU market share.

Those who neglected safety by default despite having global ambitions must adapt quickly, despite the associated costs and time investments. Market leaders like Google, Meta, and Microsoft may hesitate to commercialize in the EU until their AI systems achieve compliance, requiring further investment in redesigning or fixing their AI systems. Additionally, they must consider environmentally friendly practices for model creation.

The US, Canada, the UK, and other major countries will face pressure to act. They can leverage the best parts of the EU AI Act to expedite their legislative timelines. Nevertheless, serious enactment of legislation is still at least two years away. The positive aspect is that they will find more willing collaborators among AI market leaders to refine and create a business-friendly, cost-effective regulation while prioritizing safety needs.


Transforming lives through technology. Check out my product leadership blogs on Medium and my video series at youtube.com/@bakernanduru