3 Questions for

Detlef Houdeau

Senior Director Business Development in the Digital Security Solution department at Infineon Technologies AG and member of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme


From a helpful to a criminally used tool: Protecting AI from misuse

Artificial Intelligence (AI) is a helpful tool that can ease people's work and everyday lives in many ways. At the same time, AI systems - often highly networked and sometimes embedded in other systems - can also be used for illegal purposes. Protecting them from misuse is an important task, today and in the future. Using the example of digital image processing, Detlef Houdeau outlines how AI algorithms help professional and amateur photographers take better pictures - but can also be used with criminal intent to forge travel documents. Detlef Houdeau is Senior Director Business Development in the Digital Security Solution department at Infineon Technologies AG and a member of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme.


Mr. Houdeau, when do you talk about misuse of an AI system?

Detlef Houdeau: An AI algorithm is usually developed for a specific field of application and is trained and qualified for it before being deployed. If the AI algorithm is then used in a different context, that is a misappropriation. If, in addition, fundamental personal rights or material assets are violated in the process, this constitutes misuse of the AI system. Such misuse affects both the mathematical model on which the AI algorithm is based and the training data. The latter essentially form what the system has "learned" and thus its "knowledge" - which is what makes an AI system valuable in the first place. Classical IT systems usually manage without training data, so the threat they pose in the event of misuse can be considered lower.



How do potential attackers proceed?

Detlef Houdeau: Let me outline this with an example. For some years now, professional photo studios have been using digital image processing software to turn good digital photos into excellent ones. Numerous targeted digital corrections can be made with it, such as adjusting brightness, sharpness, color, angle and cropping. AI algorithms are used for this purpose in a (partially) automated manner. Image processing software is now widely available - as commercial software packages, open-source software and online services. With the latter, the digital photo is processed in a web application and the result is delivered digitally to the customer right away. Photo studios also use image editing software for facial shots, for example at weddings, or for portrait photos for resumes. Some smartphones already ship with image editing software as well.

Image editing software is misused when the facial images of several people are "digitally mixed" for illegal purposes, for example to deceive about a person's identity. Among experts, this procedure is known as morphing, and it is already very powerful: it has been scientifically demonstrated that facial photos of up to seven people can be successfully morphed into one. A traveler carrying a passport with a computer-manipulated portrait photo could pass undetected through an automated border control, such as at an airport. The discrepancies between the facial photo of the traveler captured by the camera and the digital image stored on the microchip in the passport are too small for the manipulation to be detected. Morphing is therefore a punishable offence in the case of travel documents such as biometric passports or electronic residence permits for third-country nationals in the EU.
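The "digital mixing" described above can be illustrated at its arithmetic core. The sketch below is a deliberately simplified, hypothetical illustration: real morphing tools also align and warp facial landmarks before blending, a step omitted here. It only shows the weighted pixel averaging that underlies the technique.

```python
# Simplified sketch, not a real morphing tool: blends pre-aligned
# grayscale "images" (lists of pixel rows) by weighted averaging.

def morph_pixels(faces, weights):
    """Blend several aligned grayscale images into one composite
    by taking a weighted average at every pixel position."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    height = len(faces[0])
    width = len(faces[0][0])
    composite = []
    for y in range(height):
        row = []
        for x in range(width):
            value = sum(w * face[y][x] for face, w in zip(faces, weights))
            row.append(round(value))
        composite.append(row)
    return composite

# Two tiny 2x2 "images": an equal blend yields the pixel-wise means.
face_a = [[0, 100], [200, 50]]
face_b = [[100, 100], [0, 150]]
composite = morph_pixels([face_a, face_b], [0.5, 0.5])
print(composite)  # [[50, 100], [100, 100]]
```

With unequal weights, the composite can be biased toward one person's features while still retaining traces of the others, which is what makes morphed passport photos hard to spot by eye.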

Little is known about the extent to which morphing technologies are in use in the countries from which most illegal entrants to the Schengen area originate. Moreover, it is often unclear whether these technologies and their applications are punishable offences in those countries.


How can the misuse of AI systems be prevented?

Detlef Houdeau: Let's stay with the example of image manipulation software misused for identity deception: with the knowledge of how morphing technologies manipulate digital facial photos, corresponding detection software can also be developed. To detect image manipulations, individual image points - pixels - can be analyzed and evaluated, preferably both on the face (e.g., around the pupils) and in the background, provided the images have sufficiently high resolution. Research institutions as well as companies specializing in biometrics have been working for several years on AI-based software to detect such attacks. The goal of this so-called morphing attack detection (MAD) is to achieve high hit rates.
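To make the pixel-analysis idea concrete, here is a minimal, hypothetical sketch - not a real MAD system, which would rely on trained classifiers and high-resolution inputs. It only illustrates one intuition behind such detectors: averaging several photos suppresses fine pixel-level detail, which simple statistics can reveal.

```python
# Hedged illustration only: production morphing attack detection uses
# trained classifiers. This sketch just shows that blending images
# flattens pixel-to-pixel contrast, a measurable statistical trace.

def detail_score(image):
    """Mean absolute difference between horizontally adjacent pixels:
    a crude proxy for high-frequency image detail."""
    total = count = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count

def blend(images):
    """Pixel-wise average of equally weighted, aligned grayscale images."""
    n = len(images)
    return [[sum(img[y][x] for img in images) / n
             for x in range(len(images[0][0]))]
            for y in range(len(images[0]))]

# Two synthetic one-row "images" with strong pixel-to-pixel contrast.
face_a = [[10, 200, 20, 190]]
face_b = [[200, 10, 190, 20]]
morphed = blend([face_a, face_b])

print(detail_score(face_a))   # 180.0
print(detail_score(morphed))  # 0.0 -- blending erased the detail
```

A real detector would of course never rely on one hand-picked statistic; the point is only that blending leaves measurable artifacts that software can be trained to find.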

However, since AI algorithms for professional image processing are constantly being developed further, it can be assumed that possible morphing attacks are also constantly improving in quality. Consequently, AI-based detection software must also constantly keep up with - or, at best, outpace - the state of the art. This threat of identity deception has the potential to become an ongoing issue for law enforcement agencies. Banning commercial imaging software seems unrealistic, especially since it would have to be applied globally to be effective.

In Germany, identity deception is made more difficult by measures such as taking facial photographs directly at registration offices or electronically transmitting digitally signed photographs from authorized photographers to the registration offices. However, this alone cannot be a complete answer.


Further information on the topic is provided in the white paper "Protecting AI systems, preventing misuse - measures and scenarios in five application areas" (in German) of Plattform Lernende Systeme, as well as an interactive presentation of possible attack scenarios (in German).

The interview is released for editorial use (provided the source is credited: © Plattform Lernende Systeme).
