The nerdy, sci-fi fanatic in me likes to think that this is us finally arriving at a real-life version of Isaac Asimov’s “The Three Laws of Robotics”. On a quick due diligence check, I can confirm that there is nothing in the EU AI Act covering the 2nd and 3rd laws of Robotics. However, the 1st and 0th laws are arguably quite well represented. I’ll admit, we’re getting to a down-the-boozer level of debate here, so I’ll stop dwelling on the point.
Key Criticisms of the EU AI Act: What the Regulation Misses
Given the generally positive outcomes the General Data Protection Regulation delivered for European citizens in the 2010s, one surprising and specific criticism of the EU AI Act is its lack of provision for individuals negatively affected by AI to pursue legal avenues to complain about that misuse. I do like the idea of performing a Subject Access Request to find out exactly which of my data was used to train an AI model at some point, but I don’t feel like spoiling anyone’s day right now.
One issue I couldn’t find a straight answer on is that some of the most common worries people have about AI, such as deepfakes, are mentioned only once, categorised under the “Limited Risk” scenario. This classification assumes that application developers will clearly inform users when they’re interacting with a deepfake. Although deepfakes are more likely to be exploited by criminal groups who consider themselves beyond the reach of the law, the EU AI Act at least places responsibility on public-facing app developers to ensure such interactions are clearly marked as non-human.
The UK’s Stance on the EU AI Act and Its Own AI Regulation
UK organisations offering AI capabilities to EU citizens must comply with the Act’s requirements. Cyber security providers, such as our partner CrowdStrike, have already responded to EU AI strategy proposals, highlighting the role of AI in threat detection and compliance.
In terms of domestic scope, the UK does not currently have equivalent legislation. From what I can see, existing proposals, such as the AI Regulation Bill, remain at the Private Member’s Bill stage, which means they’re not yet official government business (at the time of writing). That said, the Department for Science, Innovation and Technology has recently spawned an AI Security Institute (“Security” replaces “Safety” in the name, which is interesting in and of itself), which will be “one to watch” over the next few months.
As SEP2’s CEO Paul Starr recently observed in CRN, the UK’s most promising path forward may lie in Trusted AI workloads: those that prioritise responsible deployment, regulatory alignment, and practical safeguards.
The Future of AI Regulation: A Growing Divide Between the EU and US
The reality is that OpenAI, Google, Anthropic and the other major players are ultimately US-based organisations. Based on how US companies have responded to previous EU regulations, such as GDPR and cookie-consent rules, their likely response will be either baseline compliance or, in some cases, restricting access for European users.
Given the current makeup of the US legislative and executive branches, meaningful regulation from the US seems unlikely in the near future. Add to that the CLOUD Act (“Your data is also our data”), and we’re looking at a widening divergence between European and American data interests, and a corresponding push for European data sovereignty.
That said, this divergence only becomes meaningful if Europe “seizes the moment” and builds compelling cloud platforms and Artificial Intelligence products of its own, which, quite frankly, is rather unlikely to happen at Silicon Valley scale.