On Friday, 8 December 2023, the European Parliament and the Council of the European Union reached a provisional agreement on the Artificial Intelligence Act, commonly referred to as the EU AI Act. It will still take roughly two more years before the Act takes full effect, but it will require all European and non-European companies participating in the European market to comply.
It’s difficult to open a business magazine or scroll LinkedIn without seeing #EUAIAct mentioned. It seems like everyone is talking about it, but with all the talk comes plenty of confusion and misinformation.
Since this new piece of legislation directly impacts our business, we took the past 2 weekends to dive into the nitty-gritty details and figure out what all the fuss is about.
The EU AI Act is the first comprehensive piece of legislation in the world to regulate AI. Similar to GDPR, the Act aims to protect users’ and citizens’ privacy and security. Alexis, our CEO, likes to explain it as “Europe’s logical next step after GDPR.”
Yes. It will affect European citizens, European companies, international companies doing business in Europe, and AI tech companies. However, the degree to which you’re impacted will vary depending on how “risky” your AI usage is.
It depends…
You should be concerned about which AI technologies your company uses, what information is fed to the AI model, and how the model processes and stores that information. EU AI Act or not, this is, in our view, a sensible approach to responsible business.
But just because your company works with AI, doesn’t mean you’ll have to stop using it tomorrow.
The EU AI Act works with a risk assessment system.
The requirements are still being defined, but in principle, the Act will classify AI usage on a scale from low to medium to high risk, each with its own obligations. For example, a company with medium-risk usage may have to provide documentation on how it uses AI and how that usage affects its users, while a low-risk company won’t have to change anything in its current AI setup.
-Alexis Safarikas, L'Echo
Before jumping into long compliance discussions with your DPO or legal team, take a moment to assess your risk.
These 4 questions will determine whether the EU AI Act applies to you and how risky your AI usage is:
The EU AI Act applies to every industry except for those working in research or for the military.
The EU AI Act applies to all AI models. However, open source models are generally considered more transparent and accountable than proprietary models. Your model won’t define your risk level, but it can influence it.
According to Article 3 of the AI Act, “substantial modification” means “a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed.”
In short: are you changing the LLM so much that it does something completely different from what it was originally designed to do, or so that it no longer follows the important rules it was intended to follow?
The specifics still need to be defined by the European Parliament, but if you’re not making a substantial modification, then your AI usage is likely to be low risk.
Depending on the degree of your system modifications, your risk can change to medium or high.
The EU AI Act prohibits companies from using AI for practices such as social scoring, manipulating people’s behaviour to cause harm, untargeted scraping of facial images, emotion recognition in the workplace and in schools, and biometric categorisation based on sensitive characteristics.
If you’re using AI for other purposes, you’ll need to follow the regular guidelines for collecting, processing, and storing user data under GDPR. If you're concerned about your specific use case, we recommend checking the Future of Life Institute's EU AI Act Compliance Checker.
No. The EU AI Act aims to regulate how companies use AI and to ensure that they protect European citizens by prioritizing transparency and privacy.
Don't use this legislation as an excuse to avoid experimenting with AI. If risk is your main concern, then start with a proof of concept using dummy data! That will help you and your team get familiar with the technology and understand what risks may surface in a future project.
With all the misinformation flying around about the EU AI Act, it's best to sit tight and wait until the spring, when the European Parliament publishes the final version of the Act, including all the nitty-gritty details on classifying low-, medium-, and high-risk usage.
If you're doubting whether something you're reading about the AI Act is true or false, try running it through our EU AI Act Fact Checker. This custom GPT is designed to check your input against the official European Parliament proposal and adopted amendments.
If you want to discuss AI in more detail, then reach out to Alexis.
He's ready to chat in French, English and Greek.