Published: 02/09/25

The EU AI Act: What You Need to Know About Its New Rules

The EU AI Act

The EU AI Act officially entered into force on 1 August 2024 and is now in a phased implementation period. You can find the European Parliament’s summary here. The Act provides a formal governance structure setting out which AI use cases the European Union deems unacceptable, along with a tiered scale of risk and associated controls for other use cases. A more detailed breakdown is available here.

The EU AI Act places AI applications into one of the following four risk categories, based on the capability or use case the system addresses: 

  • Unacceptable Risk (i.e. Banned Usages) 
  • High Risk 
  • Limited Risk 
  • Low or Minimal Risk 

The level of risk determines the degree of control, governance and transparency that the provider or developer of the application will need to apply. 

Banned AI applications in the EU include: 

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children 
  • Social scoring AI, which classifies people based on behaviour, socio-economic status or personal characteristics 
  • Biometric identification and categorisation of people 
  • Real-time and remote biometric identification systems, such as facial recognition in public spaces 

It also designates other use cases as “High Risk”, covering systems that fall under EU safety regulations and systems that could affect fundamental rights, such as access to employment or border control. Given the “recruitment bot” horror stories emerging from the US, such as McDonald’s AI hiring system exposing 64 million applicants’ details, I find the employment use case especially interesting. Lower-risk use cases are subject to correspondingly lighter control and transparency requirements. 

[Graphic: EU AI Act and Asimov’s Three Laws of Robotics, with the 0th and 2nd Laws shown in blue, the 1st Law highlighted in orange, and a robot lighting up the text “EU AI ACT”.]

The nerdy sci-fi fanatic in me likes to think that perhaps this is us finally arriving at a real-life version of Isaac Asimov’s “Three Laws of Robotics”. On a quick due diligence check, I can confirm that there is nothing in the EU AI Act covering the 2nd and 3rd Laws of Robotics. However, the 1st and 0th Laws are arguably quite well represented here. I’ll admit, we’re getting to a down-the-boozer level of debate here, so I’ll stop dwelling on the point. 

Key Criticisms of the EU AI Act: What the Regulation Misses 

Given the generally positive outcomes that the General Data Protection Regulation delivered for European citizens in the 2010s, one surprising and specific criticism of the EU AI Act is its lack of provision for individuals negatively affected by AI to pursue legal avenues of complaint about misuse. I do like the idea of submitting a Subject Access Request to find out which of my data has been used to train an AI model at some point, but I don’t feel like spoiling anyone’s day right now. 

One issue I was unable to find a straight answer on: some of the most common worries people have about AI, such as deepfakes, are mentioned only once, categorised under the “Limited Risk” scenario. This classification assumes that application developers will clearly inform users when they are interacting with a deepfake. Although deepfakes are more likely to be exploited by criminal groups who consider themselves beyond the reach of the law, the EU AI Act at least places responsibility on public-facing app developers to ensure such interactions are clearly marked as non-human.  

The UK’s Stance on the EU AI Act and Its Own AI Regulation 

UK organisations offering AI capabilities to EU citizens must comply with the Act’s requirements. Cyber security providers, such as our partner CrowdStrike, have already responded to EU AI strategy proposals, highlighting the role of AI in threat detection and compliance.  

In terms of domestic scope, the UK currently has no equivalent legislation. From what I can see, existing proposals, such as the AI Regulation Bill, remain at the Private Member’s Bill stage, which means they are not yet official government business (at the time of writing). That said, the Department for Science, Innovation and Technology has recently spawned an AI Security Institute (“Security” replaces “Safety” in the name, which is interesting in and of itself), which will be “one to watch” over the next few months. 

As SEP2’s CEO Paul Starr recently observed in CRN, the UK’s most promising path forward may lie in Trusted AI workloads: those that prioritise responsible deployment, regulatory alignment, and practical safeguards.  

The Future of AI Regulation: A Growing Divide Between the EU and US 

The reality is that OpenAI, Google, Anthropic and the other major players are ultimately US-based organisations. Based on how US companies have responded to previous EU regulations, such as GDPR and the cookie-consent rules, we can reasonably predict that their response will take the form of either baseline compliance or, in some cases, restricting access for European users.  

Given the current makeup of the US legislative and executive branches, meaningful regulation from the US seems unlikely in the near future. Add to that the CLOUD Act (“your data is also our data”), and we are looking at a growing divergence between European and American data interests, and with it a growing push for European data sovereignty.  

That said, this divergence only becomes meaningful if Europe “seizes the moment” and builds compelling cloud platforms and artificial intelligence products of its own, which is, quite frankly, rather unlikely to occur at Silicon Valley scale.