Artificial Intelligence: How to prevent its misuse

Artificial Intelligence (AI) already supports people in their everyday lives and improves processes in companies and public authorities. However, self-learning systems can also be used contrary to their intended purpose and harm society and the economy. A recent white paper from Plattform Lernende Systeme shows how AI systems can be protected. Using realistic application scenarios, the experts illustrate possible challenges and name concrete precautions against misuse. Their conclusion: potential entry points for malicious use can and must be closed at every stage of an AI system's lifecycle, from development to disposal. Besides technical and organizational precautions, this also requires a responsible approach to AI in society.

Download the Executive Summary

Autonomous vehicles can move people safely through traffic - or, repurposed as a weapon, be steered into a crowd. AI-controlled drones can deliver packages - or drugs. Language models can improve customer service - or compose deceptively genuine phishing emails that smuggle malware into a company. The beneficial uses of AI are as varied as the possibilities for its abuse. The authors of the white paper "Protecting AI systems, preventing misuse" define misuse of AI systems as "misappropriation with negative consequences"; behind it, they say, there is always human intent, pursued by different actors with different motives. The experts stress that manipulating AI systems has a greater reach than manipulating other technical systems: when AI systems are used to prepare or make decisions, an attacker can influence the actions of both humans and machines.

"The fact is that AI systems can always be misused by criminals, state organizations or economic competitors for dishonest purposes - be it to conduct espionage, spread false information or monitor people. "We therefore have to look at possible vulnerabilities from the very beginning, from design to maintenance," says Jürgen Beyerer, director of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB and head of the Hostile-to-Life Environments Working Group of Plattform Lernende Systeme. "We have to put ourselves in the shoes of a possible perpetrator and consider which attack scenarios are conceivable in a specific application. Technical protection mechanisms must be integrated for this, but organizational precautions must also be taken."

Not only the AI system itself, but also its data and learning processes must be protected. Both classic cybersecurity measures and AI-supported tools such as anomaly detection or identity recognition are used for the technical protection of AI systems.
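To give a rough idea of what such AI-supported protection can look like in practice, the following minimal Python sketch flags unusual usage of an AI service with an off-the-shelf anomaly detector. The choice of features, the thresholds and the scikit-learn setup are illustrative assumptions, not measures prescribed by the white paper.

```python
# Minimal sketch: flagging anomalous usage of an AI service with an
# off-the-shelf anomaly detector (scikit-learn's IsolationForest).
# Feature choice and parameters are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-request features: [requests per minute, input length,
# share of requests outside business hours]. Normal traffic as baseline.
normal_traffic = np.column_stack([
    rng.normal(10, 2, 500),    # moderate request rate
    rng.normal(200, 50, 500),  # typical input length
    rng.uniform(0, 0.2, 500),  # mostly during business hours
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of rapid, long, off-hours requests - e.g. automated probing
# or scraping of the model - should stand out from the baseline.
suspicious = np.array([[120.0, 4000.0, 0.9]])
for score, label in zip(detector.decision_function(suspicious),
                        detector.predict(suspicious)):
    if label == -1:  # IsolationForest marks outliers with -1
        print(f"anomalous request pattern (score={score:.3f}) - review or throttle")
```

In a real deployment, such a detector would be one layer among several, feeding alerts into the same monitoring and incident processes used for classic cybersecurity.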

"Misuse" in this context does not necessarily mean that the AI system is hacked, but that it is used, as is, for an unintended, malicious purpose. For example, an autonomous car could be misused for attacks, or an AI system that recognizes toxins for safety reasons could be used to develop novel, even more toxic substances," explains Jörn Müller-Quade, professor of cryptography at the Karlsruhe Institute of Technology (KIT) and head of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme. "Therefore, precautions must already be taken in the development of AI systems that at best detect and prevent such criminal use, but at least make it significantly more difficult."

The white paper's authors emphasize that safeguards should cover not only the AI system itself but also its environment and the people who develop, use or control it. Human misconduct can create risks at any stage of an AI system's lifecycle, they say. Clear processes and rules, for example on how AI is handled within a company, make misuse more difficult. According to the white paper, it is also necessary to strengthen AI literacy in society and to promote open discussion of the weaknesses of AI systems. In addition, the experts recommend that selected AI systems be regularly checked for vulnerabilities by independent third parties - even after they have been approved - and that responsibility and liability in cases of misuse be clarified at the European level.

About the white paper

The white paper Protecting AI Systems, Preventing Misuse (Executive Summary) was written under the lead of experts from the working groups Hostile-to-Life Environments and IT Security, Privacy, Legal and Ethical Framework of Plattform Lernende Systeme. Members of other working groups also contributed.

Additional information

Illustrative scenarios (in German) on the Plattform Lernende Systeme website show the protective measures that can be taken to prevent misuse. Graphics for the scenarios are available for download (in the "Application scenarios" section). In a short interview, co-author Detlef Houdeau, Senior Director Business Development at Infineon, explains how criminals proceed when misusing an AI system.

Further information:

Linda Treugut / Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
