AI regulation: How to design trustworthy systems

Artificial Intelligence (AI) improves processes and enables new business models, but it also harbours risks. To ensure that only safe applications are used, the European Commission wants to regulate AI systems according to their risks. Experts from Plattform Lernende Systeme consider this approach necessary, but not sufficient, for creating trustworthy AI systems. They call for the concrete application context of a system to be taken into account and for complaint bodies and clear liability rules to be established. In a new white paper, they name additional criteria for assessing the risk potential of an AI system and show how responsibility for damage should be divided between the actors involved.

Download the white paper (in German)

AI systems can improve medical treatment, contribute to more sustainable mobility and relieve employees in the workplace. However, their use can also be associated with ethical and safety risks, for example if the recommendation of an assistance software leads to discriminatory personnel decisions or an autonomous vehicle causes an accident. The aim of the European Commission's current regulatory proposal is therefore to make the use of AI systems safe and trustworthy without inhibiting innovation. To this end, the EU has classified AI applications according to their potential danger (so-called criticality). For example, systems for the intelligent maintenance of industrial machinery pose no risk. They correspond to the lowest of four risk levels and, according to the EU proposal, do not require regulation. Other conceivable AI applications, however, harbour risks and must be regulated, up to and including a ban if their risks are classified as unacceptable, as in the case of social scoring by government agencies.

In the white paper "Criticality of AI systems in their respective application contexts", experts from Plattform Lernende Systeme analyse the proposal for AI regulation presented by the European Commission in April 2021, which is now being discussed in the European Parliament and the Council of Ministers. They flesh out the criteria against which the risks of AI systems can be assessed and stress that AI systems must always be assessed case by case and in light of their respective application context. "The same system can be unproblematic in one context and highly critical in another. AI for detecting hate speech, for example, might initially be considered comparatively unobjectionable. But if the same application is used by a totalitarian state to find and eliminate critical statements, then the assessment is reversed," says Jessica Heesen, media ethicist at the Eberhard Karls University of Tübingen and co-leader of the working group "IT Security, Privacy, Legal and Ethical Framework" of Plattform Lernende Systeme.

How critical a system is judged to be, and how strongly it should be regulated, is something the EU Commission wants to determine in advance using defined criteria. The white paper recommends taking a closer look at two questions: whether the recommendations or decisions of an AI system endanger human life or legally protected interests such as the environment, and how much room for manoeuvre is left to humans in the selection and use of the application, for example the ability to switch off certain functions. According to the authors, the control and decision-making options available to users of AI systems need to be given more weight when assessing criticality. For example, it makes a difference whether an AI software for stock trading carries out sales automatically or merely recommends them to the stock owner.

Their conclusion: The European Commission's approach of regulating AI systems according to their risk potential is a necessary step on the way to trustworthy AI systems. However, it is not sufficient, especially for applications with higher degrees of autonomy. According to Peter Dabrock, ethics professor at the Friedrich Alexander University Erlangen-Nuremberg and member of Plattform Lernende Systeme, there is a danger that the risk classification will create a sense of security that cannot in fact be guaranteed without accompanying non-technical measures.

Complaint mechanisms and liability rules

In general, the risks of AI systems can only be predicted to a limited extent. Unlike conventional software, AI systems continue to learn during use and change continuously, which limits what a one-off risk assessment can capture. The authors therefore call for risk-based regulation to be supplemented by further mechanisms that take effect during and after the use of an AI system. They propose a consumer protection regime that offers users low-threshold and timely complaint options, for example in the event of discrimination by an AI system. In addition, responsibility for the risks of AI systems must be clearly allocated via liability rules. In the B2B sector, for example, the user should generally be responsible under contract law for the results delivered by AI systems; for applications in the public sector, the state should bear full responsibility under public law for discriminatory or harmful consequences.

About the white paper

The white paper "Criticality of AI systems in their respective application contexts. A necessary but not sufficient building block for trustworthiness" was written by experts of the working group "IT Security, Privacy, Legal and Ethical Framework" and the working group "Technological Enablers and Data Science" of Plattform Lernende Systeme. It is available for download free of charge.

Further information:

Linda Treugut / Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
