What Does the New AI Law Being Prepared by the European Union Look Like?

The European Parliament is preparing the world’s first comprehensive law on AI. What will this pioneering regulation look like, what ‘red lines’ does it draw, and what benefits will it bring?

The impressive development of AI over the last few years has generated a great deal of discussion. Its evolution is so rapid that many experts have warned of the need to pause its development for a few months in order to analyse its risks and rethink where we want to go.

At Silicon.es we have already noted that the development of explainable, ethical, and responsible AI will be one of the great technological challenges of the coming years. In that report, we covered some of the initiatives that have emerged to regulate advances in this technology, from both private organisations and public institutions. Among them, we discussed the European Commission’s proposed Regulation on the legal framework applicable to AI systems.

Not surprisingly, the European Union has always been very cautious and has been taking small steps in this direction for some time. For example, almost five years ago we reported on the publication of the first draft of ethical principles to be considered in the development of trustworthy AI.

Three years ago, we also reported on the European Union’s misgivings about the use of facial recognition in public spaces and analysed the possible impact it could have on the development of this industry.

All these documents have been shaping the corpus on which the EU’s new AI law is based. “The European Parliament is aware of the economic and social benefits that the use of AI will bring in all sectors, but it is also concerned about the risks posed by these new technologies, especially for human rights and fundamental freedoms, in particular with regard to discrimination, data protection, and citizens’ privacy,” says Enrique Puertas, Professor of AI and Big Data at the European University.

Principles of the new law

The law being prepared by the European Parliament is structured around six principles. “The law aims to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. It also advocates human supervision of AI systems to avoid causing harm,” details Maite López-Sánchez, professor of AI at the University of Barcelona (UB).

Let’s look at each of them in detail:

Safe systems. “It is necessary to apply the precautionary principle with disruptive technologies such as AI, which can be beneficial, but the associated risks must be taken into account, appropriate security restrictions must be established and the privacy of individuals and cybersecurity must be guaranteed,” explains Jordi Ferrer, professor at EAE Business School and lawyer specialising in Digital Law.

Ensuring transparency. “Transparency allows inadequate practices to be corrected, especially in data collection processes and systems training. It is necessary to apply understandable and transparent information policies,” explains Ferrer.

System traceability. The EAE Business School professor points out that “it must be guaranteed that we understand and know how the system evolves, so that if necessary we can trace and investigate how it works”.

Guarantee of non-discrimination. “Systems must avoid unfair biases, as these could have multiple negative implications, from the marginalisation of vulnerable groups and racial minorities to the exacerbation of prejudice and discrimination,” he warns.

Respect for the environment. Ferrer points out that these are “energy-intensive systems that have to be weighed against the current situation of sustainability guarantees”. We also recently published a report on this issue.

Human supervision. The EAE Business School expert stresses that “automation from start to finish is not acceptable, as it can generate harmful results”, and insists that “human supervision must be applicable at some point in the process”.

Finally, Ferrer specifies that “the regulation aims to establish a uniform and technologically neutral definition of AI and to remain flexible enough to accommodate the evolution of the systems to which it applies”.

‘Red lines’ for unacceptable risks

One of the novelties of the law is the establishment of various obligations for technology providers based on an assessment of the level of AI risk. “This risk analysis is already applied in personal data processing systems, as a result of European privacy regulations,” explains the EAE Business School professor.

The law thus establishes a categorisation of AI models by risk level. “Four levels of risk are defined: unacceptable, high, limited, and low or minimal. The level of risk is set according to the types of data used and the purpose for which the AI models are used,” says Puertas.

The highest level of risk is associated with systems that may pose a threat to people. The EU sets a ‘red line’ here and considers them unacceptable, so they will be banned.

Ferrer gives some examples. One would be systems that involve cognitive manipulation of the behaviour of vulnerable people or specific groups such as children. “An example would be voice-controlled toys that use AI and may pose risks to minors.”

He also mentions social scoring systems. “AI would rank people on the basis of behaviour, personal characteristics, and so on. Such systems are in operation in China, for example.” The law also covers real-time biometric identification systems, such as facial recognition.

“The second level of risk corresponds to systems that negatively impact the security or fundamental rights of individuals, such as medical devices, aviation, education, employment or interpretation of the law, among others. These systems will have to be assessed throughout their entire lifecycle,” explains López-Sánchez. With regard to high-risk systems, she specifies that “although they are not banned, they are evaluated and monitored closely”.

The UB professor points out that “systems with a limited level of risk are also identified, for which only transparency mechanisms are required to enable informed decision-making”.
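To make this tiering more concrete, here is a minimal Python sketch of how a purpose-based risk classification could be modelled. The four tier names follow the article; the example purposes and the classify_system helper are hypothetical illustrations, not the regulation’s actual annex lists.

```python
from enum import Enum

# The four risk tiers named in the proposal, as described in the article.
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright ('red lines')
    HIGH = "high"                  # allowed, but assessed over the whole lifecycle
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping from intended purpose to tier, loosely based on the
# examples quoted in the article; the real regulation defines these categories
# in its annexes, not in a lookup table like this one.
PURPOSE_TO_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "realtime_biometric_id": RiskLevel.UNACCEPTABLE,
    "medical_device": RiskLevel.HIGH,
    "recruitment": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify_system(purpose: str) -> RiskLevel:
    """Return the risk tier for a declared purpose (illustrative only)."""
    # Unknown purposes default to minimal risk here; a real assessment
    # would require a case-by-case legal analysis.
    return PURPOSE_TO_RISK.get(purpose, RiskLevel.MINIMAL)

if __name__ == "__main__":
    for purpose in ("social_scoring", "medical_device", "chatbot"):
        print(f"{purpose}: {classify_system(purpose).value}")
```

The key design point the law encodes, as the experts quoted here note, is that obligations attach to the purpose for which a system is used rather than to the underlying technology.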

Focus on generative AI

Although the law began to be drafted before the emergence of ChatGPT and generative AI, the EU has been quick to include some points on this issue.

“In the last version that was voted on, in May 2023, the concept of generative AI was introduced at the last minute. Many of the problems that could arise with generative AI, such as fake news, deepfakes, impersonation, etc., are covered by the new law, as it is designed with the purpose of use in mind rather than the specific AI technology underneath. Therefore, many of these situations would be covered and regulated,” says the European University professor.

Ferrer believes that the solution to the problems generative AI could create would be “to comply faithfully with the principle of transparency that will be included in the regulation”.

This is specified as follows: “The system must be obliged to report that the content has been generated by AI. In addition, it should be trained and designed in such a way that it does not generate illegal content or content that could violate regulations. For example, it should not violate privacy. Finally, it should provide transparency and publish summaries of the copyright-protected data used to train the system,” says the EAE Business School expert.
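As a purely illustrative sketch, the disclosure and training-data obligations quoted above could be represented as metadata attached to every piece of generated content. The GenerationRecord structure and disclose helper below are hypothetical, not part of any real compliance API.

```python
from dataclasses import dataclass, field

# Hypothetical record of the transparency obligations quoted above:
# disclose AI authorship and publish a summary of copyrighted training data.
@dataclass
class GenerationRecord:
    content: str
    ai_generated: bool = True  # obligation: disclose that AI produced it
    training_data_summary: list[str] = field(default_factory=list)  # published summary

def disclose(record: GenerationRecord) -> str:
    """Prepend a human-readable disclosure notice to generated content."""
    notice = "[This content was generated by an AI system.]"
    return f"{notice}\n{record.content}"

if __name__ == "__main__":
    record = GenerationRecord(
        content="Example paragraph produced by a generative model.",
        training_data_summary=["news-corpus-2022 (summary published separately)"],
    )
    print(disclose(record))
```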

On the other hand, Puertas warns that “there are other aspects that remain ‘lame’, such as those related to intellectual property, as the May proposal leaves many aspects that affect generative AI up in the air”.

Benefits, but also limitations

The new regulation will bring benefits for tech companies. “The most immediate benefit is that it sets the ‘rules of the game’ on the use of AI. At the moment, we are in a situation where we have a data protection regulation, the GDPR, which does not cover all aspects related to the development of AI models, which is generating a lot of uncertainty and slowing down the development of projects due to a lack of regulatory certainty,” says Puertas.

The law will also have a positive impact on citizens. “Having a specific regulation should guarantee that fundamental rights are being respected when AI algorithms are applied to make decisions that may impact our lives,” he says.

However, the limitations set by European law could also negatively affect the innovation and competitiveness of European tech companies, compared to companies based in other countries.

“AI needs data. It feeds on data and needs it to train its algorithms effectively. If companies are hampered in using data, the development of AI models will be very limited in Europe. We are already seeing symptoms of this problem, which may grow over time,” says the European University expert.

“For example, some of the most popular AI technologies, the text-to-image generation systems, have all been developed in the US and only work with English text input. They do not work with German, French, Spanish or other EU languages. Another example is the recent launch of the social network Threads, which gained more than 100 million users in a short period of time, but very few of them EU citizens, as the company behind it, Meta, decided not to launch it in Europe initially because it considers EU data protection policies too strict,” he says.

“If a balance is not struck that guarantees the privacy of citizens and at the same time allows data to be used to train algorithms, we will start to see companies and institutions in the United States and China pull away from European companies and institutions in terms of competitiveness and innovation. And that gap could be a very serious problem for the European Union.”

In fact, he recalls that “the European Union has always been one of the regions with the most guarantees in terms of transparency and privacy of its citizens’ data”, while “other regions have been more lax with regard to the type of data that companies… or the state can collect”.

For this reason, he believes that the rules that will regulate the use of AI in other countries in the future could point in a different direction, prioritising “development and innovation over the privacy of citizens”.

However, López-Sánchez believes that this is a “necessary drawback”. “In the same way that security systems are implemented in machinery or industrial processes, we need to protect ourselves from potential harm from AI,” she notes.

“It is a strategy similar to that of the data protection law,” she says. “Although it forces European companies to make an effort, it also places the same restrictions on any company operating in Europe.”

Ferrer shares this view. “The experience we have with the GDPR leads me to believe that technology companies located outside Europe will also have to comply with the regulation, thus avoiding a negative effect on the business competitiveness of companies located in Europe.”