Making AI Regulation Ambitious and Future-Proof

The draft AI Act must be improved with regard to the classification of high-risk AI systems, says Johannes Kröhnert of the TÜV Association.

The AI Act is a great opportunity for Europe to become a global pioneer in the trustworthy and safe use of artificial intelligence. The goal must be to seize the opportunities that AI systems offer while limiting the associated risks.

Most consumer products are not covered by the AI Act

The risk-based approach envisaged by the EU institutions is correct, but the classification rules based on it fall short. Only those AI systems are to be classified as “high-risk” that are integrated into physical products already subject to mandatory testing by independent bodies. This mainly concerns industrial products such as lifts or pressure vessels. The majority of consumer products, however, including toys and smart home devices, do not fall under this testing obligation. As a result, most AI-based consumer products are not classified as high-risk under the AI Act and would therefore not have to meet its strict safety requirements. Here we see a major regulatory gap that the EU legislator still has to close in the negotiations.

Risk classification by providers can lead to misjudgements

We are equally critical of the classification of AI systems that are not integrated into existing products but are placed on the market as pure software for specific areas of application (stand-alone AI). These include, for example, AI systems for recruitment procedures or creditworthiness checks. Under the European Parliament’s proposal, providers would carry out the risk assessment themselves and ultimately decide on their own whether their product is to be classified as high-risk. This creates a risk of misjudgements. The EU legislator should therefore establish clear and unambiguous classification criteria to ensure the effectiveness of the mandatory requirements.

Mandatory independent audits of high-risk AI systems

There is also a need for improvement in the auditing of AI systems. Here, the EU legislator relies very heavily on the instrument of self-declaration by providers. Yet high-risk systems in particular can pose considerable dangers to life and health, to the fundamental rights of users (security, privacy), and to the environment. Instead of a self-declaration, a comprehensive obligation to provide proof is needed, including verification by independent bodies. High-risk AI systems should be subject to mandatory certification by notified bodies. Only independent audits can rule out possible conflicts of interest on the part of providers. At the same time, they strengthen people’s trust in the technology. According to a recent representative survey by the TÜV Association, 86 percent of Germans are in favour of mandatory testing of the quality and safety of AI systems. ‘AI Made in Europe’ can therefore become a real quality standard and a global competitive advantage.

Regulatory sandboxes cannot replace conformity assessment

The establishment of AI regulatory sandboxes (‘real labs’) is a good way to facilitate the development and testing of AI systems, especially for SMEs. The EU Parliament’s call for each member state to set up such a sandbox, on its own or in cooperation with other EU member states, is also to be supported. However, it must be clear that the use of a regulatory sandbox alone cannot trigger a presumption of conformity for an AI system. The provider must still undergo the full conformity assessment procedure before placing its AI system on the market. This applies in particular where an independent body is to be involved on a mandatory basis. Here, the EU legislator should create clarity in the AI Act.

Independent testing organisations should be involved as partners in the development and use of regulatory sandboxes. With the ‘TÜV AI Lab’, the TÜV Association has taken on the task of identifying the technical and regulatory requirements for artificial intelligence and supporting the development of future standards for testing safety-critical AI applications. In addition, we have for some time been actively involved in the establishment of interdisciplinary ‘AI Quality & Testing Hubs’ at state and federal level.

ChatGPT & Co. must also be regulated in the AI Act

The last few months have clearly shown the development potential of foundation models and generative AI systems, as well as the risks they can pose. It is therefore to be welcomed that the EU Parliament wants to regulate this technology directly in the AI Act. Generative AI systems must also fulfil basic safety requirements. In a second step, however, it should be examined which foundation models are to be classified as highly critical. These should then be subject to all the requirements of the AI Act, including independent third-party testing by notified bodies. European standardisation organisations and testing bodies are currently working on the corresponding norms and testing standards.

Johannes Kröhnert is Head of the Brussels Office of the TÜV Association.