
4 things you must know about EU AI Regulation

Earlier this year, the European Commission revealed its Draft AI Regulation, which is very likely to evolve into a regulation that will go into effect in the coming years, with major impact on companies and governments developing and/or using AI systems. Is this a big deal? Yes. Should you, as a developer or user of an AI system, start preparing? Yes. In this piece we share our thoughts on what the proposal means, why and when it is coming, and what its implications are.

1. What is the EU Draft AI Regulation?

In April 2021, the EU published its plans for regulating artificial intelligence (AI) and machine learning. In brief, the authors of the plan try to define, as generally as possible, what harm could be done using AI and how such harmful use can be forbidden by law. At the same time, they make it very clear that they have no intention of hampering innovation in any way.

So, in short:

  • The EU AI Regulation has been published and is very likely to be adopted and go into effect in the coming years,
  • It has serious consequences for companies and governments developing and/or using AI technologies,
  • It takes a risk-based approach to ensure regulation of high-risk AI systems,
  • It aims to regulate/prohibit:
    • AI systems that distort human behaviour through imperceptible techniques that are likely to cause physical or psychological harm,
    • AI systems used for social scoring that leads to unfavourable treatment of humans,
    • AI systems used for real-time biometric identification in public areas for law enforcement purposes.
  • The proposal has been called lengthy, ambitious, and fundamentally flawed by experts; let's say it is a work in progress by the EU Commission.

2. Why is it being proposed?

If you produce hammers and nails, a ban on hurting people with hammers will not affect your business, because the intended use is for people to hammer on nails, not on people.

If the hammer is the tool and hitting nails is the activity, you may wonder why the EU AI Regulation addresses the tool (AI) rather than the unwanted activities performed with it. The logic may be related to the complexity of AI, combined with its ease of use. Many companies and governments use AI as a black box, without fully understanding its behaviour. What if an algorithm says that migrants are more likely to commit fraud? Who is accountable for such an algorithm? The training data? The developer of the algorithm? The user? This example is not fictitious; it actually happened in the Netherlands, read the story here.

3. What is the impact on developers and users of AI systems?

AI systems in general have:

  • input,
  • a model that uses the input to produce an output,
  • output.
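
To make those three parts concrete, here is a minimal, purely illustrative sketch in Python; the record, the stand-in model, and its output are all made up for this example.

```python
# Minimal sketch of the three parts above; every name and value is illustrative.
input_record = {"age": 34, "bookings_last_year": 3}        # input

def model(record):                                          # model: turns input into output
    return "likely to book again" if record["bookings_last_year"] > 1 else "unlikely"

output = model(input_record)                                # output
print(output)  # -> likely to book again
```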

The plans of the EU identify three main undesired uses of AI: manipulation, discrimination, and mass surveillance. The common denominator here is the individual, which makes it feasible to formulate a core principle for protecting the individual:

"Any individual who interacts with an AI system should experience no harm from it."

While this principle is terribly vague on its own, the EU AI Regulation goes into much more detail. It attempts to make the principle concrete, so that developers and users of AI systems can check for compliance, which in most cases comes down to:

  • Being conscious about the input of the AI model,
  • Being conscious about the output of the AI model.
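
What that consciousness can look like in practice is sketched below: a thin wrapper around any model that checks the input against a policy before predicting and logs every output for later audit. The field names, the policy, and the stand-in model are assumptions made up for illustration, not legal guidance.

```python
# A sketch of being conscious about the input and output of an AI model.
# All field names and policy choices below are illustrative assumptions.
import datetime
import json

PROHIBITED_INPUT_FIELDS = {"ethnicity", "religion", "nationality"}  # example policy only

class AuditedModel:
    """Wraps any model so that inputs are checked and outputs are logged."""

    def __init__(self, predict_fn, log_path="predictions.log"):
        self.predict_fn = predict_fn
        self.log_path = log_path

    def predict(self, record: dict):
        # Input awareness: refuse fields the policy prohibits.
        prohibited = set(record) & PROHIBITED_INPUT_FIELDS
        if prohibited:
            raise ValueError(f"Prohibited input fields: {sorted(prohibited)}")

        output = self.predict_fn(record)

        # Output awareness: keep an auditable record of every decision.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "input": record,
                "output": output,
            }) + "\n")
        return output

# Usage with a stand-in model:
audited = AuditedModel(lambda r: "approve" if r["income"] > 30000 else "review")
print(audited.predict({"income": 42000, "age": 29}))  # checked, logged, and returned
```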

The EU AI Regulation rightfully puts emphasis on the single largest pitfall of AI: bias. AI is by design trained on data, or more generally on experiences, which is somewhat similar to how the human mind works. If you have never seen a black swan, you may believe that swans are always white, because your training sample was biased towards white swans. However, because you know that many animals appear in many variations, you can imagine swans of all sorts of colours. A well-trained AI algorithm can do the same: your training sample may still be biased towards white swans, but the algorithm separates colour from species and keeps an open mind about new combinations of the two.

Another, more harmful, example of bias stems from the fact that today there are fewer women than men in leadership positions at companies. This should never lead you to assume that the next woman you meet is not capable of being a leader. Whether someone is a good leader should naturally be judged on their leadership skills, not their gender. To use an algorithm for this assessment, you need to decide what makes a good leader, use that definition when designing your algorithm, and generally steer clear of labelling individuals and looking for correlations in personal characteristics.
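
A minimal sketch of that last point, with entirely made-up data and column names: the protected attribute is dropped before training, so the assessment rests on skills-related features only.

```python
# Sketch: keep the protected attribute out of a leadership-assessment model.
# The data, column names, and labels are all made up for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

candidates = pd.DataFrame({
    "gender":            ["f", "m", "f", "m", "f", "m"],  # protected attribute
    "years_experience":  [12, 3, 7, 10, 2, 8],
    "peer_review_score": [4.6, 2.1, 3.9, 4.2, 2.5, 4.0],
    "team_growth_pct":   [18, -5, 9, 14, 1, 11],
    "strong_leader":     [1, 0, 1, 1, 0, 1],               # past assessment label
})

# The protected attribute never reaches the model: assess skills, not gender.
features = candidates.drop(columns=["gender", "strong_leader"])
labels = candidates["strong_leader"]

model = LogisticRegression().fit(features, labels)
print(model.predict(features))
```

Note that dropping the column is only a first step: other features can still correlate with the protected attribute and act as proxies for it, which is exactly why the advice above is to avoid hunting for correlations in characteristics of individuals.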

4. When will it go into effect?

While it is tough to estimate accurately when the EU AI Regulation will go into effect, we can look at similar regulations and their timelines. The GDPR took about six years from its proposal in 2012 until it applied in 2018. If that is any indication, the EU AI Regulation may go into effect around 2027.

Bringing it all together

  • The EU AI Regulation serves as a welcome health check on whether the use of AI by companies and governments results in manipulation, discrimination, or mass surveillance.
  • It addresses bias as the biggest pitfall of AI, and you do not need to understand the mathematical workings of an algorithm to assess whether its input data can lead to discrimination.

 

Authors:

Dr. Wessel Valkenburg
Head of Data Science, ZYTLYN Technologies

Houman Goudarzi
CEO, ZYTLYN Technologies