The EU AI Act: Guide for Developers
This year the EU AI Act came into law, and it has worried a lot of developers. The concern is that large private labs, like OpenAI and Google DeepMind, have captured regulators and will lobby for laws that make it hard for open-source models and startups to compete. There’s also a general concern that regulation will stifle innovation and make it hard for companies to build AI products.
If you’re building an AI product, this post will help you understand whether the EU AI Act will affect you, what you need to do to comply, and what the regulation likely means for the wider tech ecosystem. We’ll also dive into how an LLM Evals platform like Humanloop can help you stay compliant.
As ever, this isn’t legal advice and you should consult your own lawyers before making decisions, but I hope it’s a useful summary of the key points of the act, written specifically for AI product teams.
Will you be affected by the EU AI Act?
The good news is that for most developers, the act probably won’t affect you. You’ll mostly be affected if what you’re building is classified as “high risk” (explained below) or if you’re building a foundation model that requires more than 10^25 floating-point operations (FLOPs) to train.
Anyone selling an AI product within the EU has to comply with the EU AI Act, even if they’re not based in the EU. The EU’s definition of “AI” is pretty broad:
“‘AI system’ means a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;”
so it covers all of generative AI as well as a lot of traditional machine learning.
The regulation is primarily based on how risky your use case is rather than what technology you use. The act splits AI applications into four possible risk categories: prohibited, high, limited and minimal:
- Prohibited AI Applications: These include systems that manipulate behaviour through subliminal techniques, exploit vulnerabilities, engage in social scoring, or perform real-time biometric identification in public spaces for law enforcement without strict justification. These applications are banned outright; the narrow exceptions for real-time biometric identification are reserved for government law-enforcement use.
- High-Risk AI Systems: These are involved in critical sectors like healthcare, education, and employment, where there's a significant impact on people's safety or fundamental rights. This is the category of application most affected by the regulation.
- Limited Risk AI Systems: These are AI applications like chatbots or content generation. The main requirement here for compliance is transparency. The end-user should know that they’re interacting with AI.
- Minimal Risk AI Applications: This category covers the majority of AI applications such as AI in games, spam filters and recommendation engines.
Most applications should fall into the limited or minimal categories, which are easy to comply with. To determine whether your application is limited risk or minimal risk, ask yourself the following questions:
- Does the AI system interact directly with users, and could these users mistake the AI's outputs for those of a human?
- Does the AI's operation influence user decisions or perceptions in a significant way?
- Is there potential for the AI system to cause harm or inconvenience if users are unaware they are interacting with AI?
If the answer to any of these questions is “yes,” the application likely falls into the limited risk category and you’ll have to make clear to users that they’re interacting with AI (a rough sketch of this check follows below).
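To make this concrete, here’s a minimal sketch of how a product team might encode that checklist. It’s an illustration, not a legal test: the function and names are hypothetical, and prohibited and high-risk uses have to be ruled out separately first.

```python
from enum import Enum

class RiskTier(Enum):
    LIMITED = "limited"   # transparency obligations apply
    MINIMAL = "minimal"   # no additional obligations

def transparency_check(
    could_be_mistaken_for_human: bool,
    significantly_influences_decisions: bool,
    harm_if_ai_undisclosed: bool,
) -> RiskTier:
    """Rough heuristic mirroring the three questions above (not legal advice)."""
    if could_be_mistaken_for_human or significantly_influences_decisions or harm_if_ai_undisclosed:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-support chatbot whose replies could pass for a human agent.
print(transparency_check(True, False, False))  # RiskTier.LIMITED
```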
What do you need to do to comply?
The first thing you need to do is determine whether your application is prohibited, high risk, or limited risk. There is a compliance checker available here that can help.
If you're limited risk then all you have to do is ensure transparency by clearly informing users that they are interacting with an AI system. This can involve clear labelling or notifications within the user interface of your application to indicate the use of AI, especially in cases where the AI generates content or interacts in a way that could be mistaken for a human.
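For example, one simple pattern (a hypothetical sketch, not a requirement of any particular framework) is to attach an explicit disclosure to the first AI-generated message a user sees in a session:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def with_ai_disclosure(generated_text: str, is_first_message: bool) -> str:
    """Prepend a clear AI notice to the first model response in a session."""
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{generated_text}"
    return generated_text

print(with_ai_disclosure("Hi! How can I help you today?", is_first_message=True))
```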
The compliance burden is much higher if your application is high-risk. Then you have to:
- Develop a Risk Management System: Continuously assess and address risks associated with the AI system.
- Implement Data Governance: Ensure data is relevant, fair, and free of biases.
- Maintain Technical Documentation: Keep comprehensive records demonstrating compliance.
- Automate Record-Keeping: Set up the system to log important events and changes (see the logging sketch after this list).
- Provide Usage Guidelines: Offer clear instructions for responsible system deployment.
- Enable Human Oversight: Design the system for meaningful human control and intervention.
- Ensure System Integrity: Uphold high standards for accuracy, robustness, and security.
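To make the record-keeping obligation concrete, here’s a minimal sketch of structured, append-only event logging for an AI system. The schema, file path and helper are all hypothetical; in practice you’d use your existing logging or observability stack, or a platform like Humanloop, rather than hand-rolling this.

```python
import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("ai_audit_log.jsonl")  # hypothetical append-only audit trail

def log_event(event_type: str, **details) -> None:
    """Append a timestamped, structured record of an AI-system event."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "event_type": event_type,  # e.g. "prediction", "prompt_change", "human_override"
        "details": details,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a model output together with the prompt version that produced it.
log_event(
    "prediction",
    model="my-llm-v3",            # hypothetical identifiers
    prompt_version="2024-03-01",
    input_summary="loan application #1234",
    output_summary="recommend manual review",
)
```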
For customers in this category, an LLM Evals platform like Humanloop can really help. Humanloop gives you tools to build high-quality datasets, evaluate and monitor performance, keep good records of usage data and provide human oversight. This lets you meet many of the requirements placed on builders of high-risk applications. High-risk AI systems have 2 years from the signing of the AI Act to come into compliance.
What about open source model providers?
The EU AI Act creates a separate category for what it considers to be “General Purpose AI” systems. Foundation models and LLMs that are trained through self-supervision on large datasets fall within this category.
There are special requirements on the developers of foundation models. They have to create comprehensive technical documentation detailing their training and testing processes, provide downstream providers with information to ensure an understanding of the model’s capabilities and limitations, adhere to the Copyright Directive, and publish detailed summaries of the training content used. Open-source providers only need to do the last two (adhere to copyright and summarise their training data).
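As a rough illustration of what that last requirement might look like in practice (the fields below are hypothetical, not an official EU template), a training-content summary could be published as simple structured metadata alongside a model release:

```python
import json

# Hypothetical training-content summary for an open-source model release.
training_content_summary = {
    "model_name": "example-7b",  # hypothetical model
    "data_sources": [
        {"name": "Filtered web crawl", "type": "web text", "licence": "mixed"},
        {"name": "Public-domain books", "type": "books", "licence": "public domain"},
    ],
    "languages": ["en", "de", "fr"],
    "approx_token_count": 2_000_000_000_000,  # ~2T tokens
    "copyright_measures": "Honours robots.txt and rights-holder opt-outs",
}

print(json.dumps(training_content_summary, indent=2))
```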
Many open-source model providers are already meeting these requirements and so probably won’t have to change their behaviour much. The big exception is for companies building models considered to pose a “systemic risk”. The EU considers models that required more than 10^25 FLOPs during training to fall into this category. All models in this category will have to undergo more stringent model evaluations, maintain strong cybersecurity, show they are mitigating risks, and report any serious incidents to the EU AI Office.
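To get a feel for where the 10^25 line falls, a common back-of-the-envelope estimate (an assumption here, not something defined in the Act) is that training a dense transformer costs roughly 6 FLOPs per parameter per training token:

```python
def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act compute threshold for "systemic risk" GPAI models

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs")             # ~6.3e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)  # False: just under the threshold
```

On this estimate, only the very largest training runs currently cross the line, which is part of why the threshold is contentious (see the discussion at the end of this post).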
What if you finetune LLMs?
The restrictions on General Purpose AI (GPAI) models apply to the original developers rather than to people building downstream applications on top of an LLM. You still need to comply with the broader AI Act, but you’re not considered a producer of a GPAI system and will only need to change what you’re doing if your application is high risk.
How Humanloop can help you be compliant
For high-risk applications like credit scoring, educational tools or employment screening, Humanloop can make compliance a lot easier. Several key features help you meet your obligations under the EU AI Act:
Humanloop datasets allow you to implement data governance. You can version and track any changes to data, investigate the data quality and control access.
Humanloop’s logging features help you meet the requirements around automated record-keeping, and our evaluation tools help ensure system integrity and allow you to provide human oversight. All of the actions your developers take whilst creating prompts and flows are recorded, as is the data that flows through your system. Our evaluation system helps ensure that your system’s performance stays above the thresholds you set.
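As an illustration of that last point (a generic sketch, not the actual Humanloop API), a deployment gate based on evaluation scores might look like this:

```python
from statistics import mean

def passes_quality_gate(eval_scores: list[float], threshold: float = 0.9) -> bool:
    """Return True if the average evaluator score for a prompt version meets the configured floor."""
    return bool(eval_scores) and mean(eval_scores) >= threshold

# Example: scores from an offline evaluation run over a versioned dataset.
scores = [0.95, 0.88, 0.97, 0.91]
if not passes_quality_gate(scores, threshold=0.9):
    raise RuntimeError("Prompt version fails the quality gate; do not deploy.")
```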
If you want to find out more about Humanloop, we’d be happy to discuss your use case and you can book a call with me personally here.
What does this mean for the AI ecosystem?
The worst elements of early drafts have mostly been stripped from the EU AI Act. Whilst there is a new compliance burden for high-risk AI systems, much of what's required is already aligned with best practices. The fact that the compliance burden scales with the risk of the use case seems sensible and the majority of AI applications won’t be affected.
The definition of “systemic” general-purpose AI systems as models that require more than 10^25 FLOPs to train seems shortsighted and somewhat arbitrary. The quality of models that can be trained within this compute budget is improving rapidly, so a FLOP threshold will likely be poorly correlated with model capability. The US executive order on AI sets its threshold 10x higher, at 10^26 FLOPs, and the stricter EU limit also disincentivises the largest players from operating in Europe.
Overall the AI Act seems like a reasonable compromise for the first attempt at regulating what will be an enormously impactful technology. The delineation of risk means most developers won’t be affected and I’m hopeful that the arbitrary restrictions on compute will be updated rapidly.
About the author
- 𝕏@RazRazcle