Putting the reins on AI

Artificial intelligence is playing an increasingly important role in our lives - and not just since the hype surrounding ChatGPT. But alongside many opportunities, AI also harbors various risks. With the Artificial Intelligence Act (AIA), the EU wants to regulate artificial intelligence in the future. In December 2023, the European Parliament and EU member states agreed on a provisional version. The regulation is due to be formally adopted at the beginning of this year.
TÜVIT expert Vasilios Danos explains what will change with the AI Act.

Mr. Danos, how will the AI Act regulate artificial intelligence in the EU in the future? In other words, what changes will it bring?

Vasilios Danos: For citizens, the AI Act primarily brings more transparency about where artificial intelligence is used and the certainty that their rights will not be violated by it. In future, companies, providers and authorities that use, market or develop AI applications will have to deal with the risks and effects these applications can have and with how those risks can be minimized. What needs to be done depends on the risk class their AI application falls into.

 

The AI Act distinguishes between four risk classes. What exactly are they?

The lowest level is the "low risk" category. This includes spam filters or AI avatars in video games, for example - applications that pose no risk of physically endangering people, violating their rights or causing them financial harm. These applications are largely exempt from all requirements. The next level is called "limited risk". It covers, for example, simple chatbots, i.e. AI systems that interact with users. In future, such applications must make it clear to users that they are dealing with an AI and not a human. Deepfakes - manipulated videos of celebrities, for example - and other AI-generated content must also be labeled as such. The third level is the "high risk" category. This includes biometric access systems that use facial recognition, for example, or AI applications that automatically screen job applications; here it must be ruled out that applicants are discriminated against and excluded because of their name, for example. An AI-controlled industrial robot that could injure people in the event of a fault also falls into this high-risk area, as do AI systems in certain critical infrastructures such as telecommunications, water and electricity supply. The AI Act imposes a correspondingly large number of strict requirements for this risk level.

 

And what is behind the fourth and final risk category?

This is the "unacceptable risk" category. These are AI systems that are considered a clear threat to fundamental rights, such as those that automatically analyze our behavior or are used for manipulation purposes. These AI applications are therefore generally prohibited under the AI Act. These include, for example, emotion recognition systems in the workplace or so-called social scoring systems, which are used in other countries such as China to evaluate and monitor people on a massive scale using AI.

 

Will ChatGPT or other AI-based chatbots also be covered by the AI Act?

When ChatGPT was released at the end of 2022, deliberations on the AI Act were already relatively advanced. The EU reacted to this development and created a separate category for these so-called foundation models - AI models that can be used for a wide range of purposes (also known as "general purpose AI", or GPAI for short) and therefore affect a great many people. Under the AI Act, the providers of these models will in future have to meet far-reaching transparency requirements, such as disclosing the origin of the training data and, where necessary, having the security of their models tested by independent third parties.

 

When will the AI Act come into force?

If everything goes according to plan and the final wording is completed, the AI Act will be passed this spring. There will then be transitional periods before the individual requirements take effect: the bans on prohibited AI applications are expected to apply after six months, the requirements for ChatGPT and other foundation models after one year, and the requirements for the other risk classes after a transitional period of two years.

 

How should companies prepare for the new requirements?

Basically, companies will not receive a letter from an authority assigning their AI application to a risk class. Instead, they are obliged to classify it correctly themselves and take the appropriate measures - and they must do so before the AI application is placed on the market or used. If an AI system falls into the high-risk category, for example, extensive requirements must be met; the robustness and security of the systems may also have to be tested by independent third parties such as TÜVIT. Companies must also implement an AI (risk) management system that covers the application throughout its entire life cycle: How was the AI system developed, what was the quality of the training data, what risks does it entail, how was it validated and tested? ISO standards have already been published for certain aspects, such as AI management systems; (testing) standards for other aspects such as security and transparency are in progress. We at TÜV NORD are also represented on the relevant committees, where the general requirements of the AI Act are being specified for particular fields of application. AI applications in mechanical engineering, the medical sector or telecommunications face different circumstances and challenges, and the tests must reflect this.
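As an illustration of the self-classification and lifecycle documentation described above, here is a minimal sketch of what a company-internal assessment record might look like. The field names and the readiness check are assumptions for illustration, not terminology or criteria from the AI Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemAssessment:
    # All field names are illustrative assumptions, not AI Act terminology.
    name: str
    risk_class: str                         # self-assigned: "low", "limited", "high" or "unacceptable"
    intended_purpose: str
    training_data_quality_documented: bool = False
    risks_documented: bool = False
    validated_and_tested: bool = False
    third_party_audit_passed: bool = False  # e.g. robustness/security testing by an independent body

    def may_go_to_market(self) -> bool:
        """Simplified check: high-risk systems need their lifecycle evidence before launch."""
        if self.risk_class == "unacceptable":
            return False
        if self.risk_class == "high":
            return all([
                self.training_data_quality_documented,
                self.risks_documented,
                self.validated_and_tested,
            ])
        return True

# Example: a CV-screening system that has not yet been validated may not be placed on the market.
screening = AISystemAssessment("CV screening", "high", "pre-filtering job applications")
print(screening.may_go_to_market())  # False
```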

 

And what are the penalties for breaches of the requirements?

Violations are punished with severe fines. Prohibited AI applications can be fined up to 35 million euros or seven percent of annual global turnover; other violations can cost up to 15 million euros or three percent of annual turnover. Correct self-assessment and implementation of the necessary measures are therefore essential for companies. However, many will hardly be able to manage this within the set deadlines without external support - especially as the testing of AI, which my team and I have been working on for several years, is often uncharted territory for companies. Unlike with conventional software, there are still no established testing tools or best practices to simply fall back on.
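Under the regulation's penalty provisions, the cap that applies is the higher of the fixed amount and the turnover-based percentage. A minimal sketch of that calculation; the function and the example turnover figure are illustrative, not taken from the interview.

```python
def max_fine_eur(annual_global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper fine limit: the higher of a fixed amount and a share of global annual turnover."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * annual_global_turnover_eur)  # banned AI practices
    return max(15_000_000, 0.03 * annual_global_turnover_eur)      # other violations

# Example: a company with 2 billion euros in annual global turnover
print(max_fine_eur(2_000_000_000, prohibited_practice=True))   # 140000000.0
print(max_fine_eur(2_000_000_000, prohibited_practice=False))  # 60000000.0
```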

 

Does the AI Act enable comprehensive control of AI, or are there still gaps that will need to be closed in the future?

As things stand, the AI Act should indeed cover all widespread AI applications. However, the field of artificial intelligence is developing rapidly. Many AI researchers did not expect AI to reach ChatGPT's level of performance for another 30 years or so. New AI technologies could also emerge in the future that are not yet covered by the AI Act. We therefore need to stay alert and act quickly to keep pace with this rapid development and to capture it in regulatory terms.

