Certification of AI: Plattform Lernende Systeme names test criteria

Artificial Intelligence (AI) is already used in many industries. To exploit its economic and social potential, it is essential that people trust AI systems and the processes and decisions associated with them. Certification of AI systems can help to increase this trust. At the same time, the right level of certification must be chosen in order to maintain the innovative strength of Germany as a business location. In a new white paper, Plattform Lernende Systeme shows when and according to which criteria AI systems should be certified, and how an effective testing infrastructure can be designed.

Download the white paper (executive summary)

Certification is one way to ensure the quality of an AI system: independent third parties confirm, usually for a limited period of time, that specified standards, norms or guidelines have been met. Not every AI application has to be certified. While the majority of AI systems, such as algorithms for identifying spam, should be unproblematic, there are applications that should be subjected to closer scrutiny, the paper states. The authors suggest that the decision of whether a system should be certified be based on an assessment of the system's so-called criticality. The questions to be asked are whether a system endangers human life or legal assets such as the environment, and how much room for manoeuvre remains in the selection and use of the application, for example to switch off certain functions. Criticality is always determined by the context of application: the same system can be unproblematic in one context and highly critical in another. A robot vacuum cleaner, for instance, could initially be considered comparatively unproblematic despite its high degree of autonomy, but if it collects data and passes it on to its manufacturer, the assessment may turn out more critical.
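To make the idea concrete, the two dimensions described above, potential harm and the room for manoeuvre left to users, could be combined into a simple scoring scheme. The following Python sketch is purely illustrative; the names, scales and thresholds are assumptions for this example and are not taken from the white paper.

```python
from dataclasses import dataclass


@dataclass
class ApplicationContext:
    """Context in which an AI system is deployed (illustrative)."""
    endangers_life: bool        # could the system endanger human life?
    affects_legal_assets: bool  # e.g. the environment, property, privacy
    room_for_manoeuvre: int     # 0 (none) .. 3 (users can switch off functions)


def criticality(ctx: ApplicationContext) -> str:
    """Map an application context to a coarse criticality level.

    Hypothetical rules: the same system is rated per context,
    mirroring the white paper's point that criticality depends
    on where and how a system is used.
    """
    if ctx.endangers_life:
        return "high"    # candidate for state regulation or bans
    if ctx.affects_legal_assets and ctx.room_for_manoeuvre <= 1:
        return "medium"  # certification recommended
    return "low"         # no certification required


# The robot vacuum example: harmless in one context ...
offline_vacuum = ApplicationContext(False, False, room_for_manoeuvre=3)
# ... more critical when it shares collected data with its manufacturer.
data_sharing_vacuum = ApplicationContext(False, True, room_for_manoeuvre=0)

print(criticality(offline_vacuum))       # -> low
print(criticality(data_sharing_vacuum))  # -> medium
```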

Overregulation should be avoided

"Certification can help a large number of AI systems to exploit their potential for social benefit in a safe and public-interest oriented way. In order for this to happen in accordance with socially recognised values, a form of certification must be found that is guided by important ethical principles, but at the same time also fulfils economic principles, avoids over-regulation and promotes innovation. In the best case, certification itself can become the trigger for new developments for a European path in AI application," says Jessica Heesen, head of the research area Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen and co-head of the working group IT Security, Privacy, Legal and Ethical Framework. In the case of a higher degree of criticality, such as an AI used to distribute study places, the authors recommend that the state should be used as a regulatory body. For particularly critical application contexts, such as biometric remote identification, the state could issue prohibitions or restrictions on use.

The authors divide the test criteria into minimum criteria, which must always be met, and voluntary criteria that go beyond them. The minimum criteria include transparency, security, freedom from discrimination and protection of privacy. Further criteria mentioned in the white paper include user-friendliness and sustainability.
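As a rough illustration of this two-tier scheme, a certification check could be modelled as verifying that all minimum criteria are met, while voluntary criteria merely enrich the certificate. The criterion names below come from the white paper; the checking logic itself is a hypothetical sketch, not the authors' method.

```python
# Two-tier criteria scheme: minimum criteria are mandatory,
# voluntary criteria go beyond them (names from the white paper).
MINIMUM_CRITERIA = {"transparency", "security",
                    "freedom from discrimination", "privacy"}
VOLUNTARY_CRITERIA = {"user-friendliness", "sustainability"}


def evaluate(passed: set[str]) -> dict:
    """Return a certification verdict for the criteria a system passed."""
    missing = MINIMUM_CRITERIA - passed
    return {
        "certifiable": not missing,  # all minimum criteria met?
        "missing_minimum": sorted(missing),
        "voluntary_met": sorted(VOLUNTARY_CRITERIA & passed),
    }


print(evaluate({"transparency", "security", "freedom from discrimination",
                "privacy", "sustainability"}))
# -> {'certifiable': True, 'missing_minimum': [],
#     'voluntary_met': ['sustainability']}
```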

Certification should be carried out before the AI system is used in practice. However, learning AI systems in particular continue to develop after commissioning, which may make regular re-certification necessary. "Existing certification systems are often too slow. As a result, IT systems are sometimes not further developed because the associated re-certification is too costly. But learning AI systems are constantly changing, not only when updates are made. A good certificate for AI must take this dynamic into account and maintain its validity regardless of technological progress," says Jörn Müller-Quade, Professor of Cryptography and Security at the Karlsruhe Institute of Technology (KIT) and co-head of the working group IT Security, Privacy, Legal and Ethical Framework at Plattform Lernende Systeme.
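To see why static certificates struggle with learning systems, consider a toy scheme, assumed here purely for illustration and not taken from the white paper, in which a certificate is bound to a hash of one exact model version. Any change to the model invalidates the binding, which is precisely the dynamic the authors argue a good AI certificate must accommodate.

```python
import hashlib


def fingerprint(model_bytes: bytes) -> str:
    """Hash of the model artefact the certificate was issued for."""
    return hashlib.sha256(model_bytes).hexdigest()


class Certificate:
    """Toy certificate bound to one exact model version (hypothetical)."""

    def __init__(self, model_bytes: bytes):
        self.fingerprint = fingerprint(model_bytes)

    def still_valid(self, model_bytes: bytes) -> bool:
        # Any change to the model, whether an explicit update or
        # continued learning, breaks the binding and would force
        # re-certification under this naive scheme.
        return fingerprint(model_bytes) == self.fingerprint


cert = Certificate(b"model-v1-weights")
print(cert.still_valid(b"model-v1-weights"))  # True
print(cert.still_valid(b"model-v2-weights"))  # False: re-certification needed
```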

According to the white paper, the first step towards a certification scheme for AI is to define the criticality thresholds above which certification is required. This calls for a broad social debate. The authors recommend involving citizens in certification processes and imparting knowledge about how AI systems work starting at school.

About the White Paper

The white paper "Certification of AI Systems: Compass for the Development and Application of Trustworthy AI Systems" (in German) was written under the leadership of the working groups IT Security, Privacy, Legal and Ethical Framework and Technological Enablers and Data Science. Members of all working groups of Plattform Lernende Systeme as well as guest authors were involved. The executive summary (in English) can be found here.

Further information:

Linda Treugut / Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
