3 Questions for

Ahmad-Reza Sadeghi

Head of the System Security Lab at the Technical University of Darmstadt and member of Plattform Lernende Systeme

Security in the AI age: "We need to break new ground."

Artificial Intelligence (AI) has made great strides in recent years. Today, AI systems can already be found on social media platforms, in search engines and as recommendation engines in online retail - and even in sensitive areas such as medicine or self-driving cars, the self-learning software is involved in decision-making. AI can support us in our everyday lives, but if it is maliciously manipulated, it can cause great harm. Prof. Dr. Ahmad-Reza Sadeghi explains the IT security challenges associated with the use of AI and how AI systems and the underlying data can be protected against attacks. He is head of the System Security Lab at the Technical University of Darmstadt and a member of Plattform Lernende Systeme.

1

Mr. Sadeghi, what new challenges do AI systems pose for IT security?

Ahmad-Reza Sadeghi: Algorithms and AI-based systems are fragile from a security perspective because they are highly data-dependent. They can be manipulated easily, and above all covertly. The more advanced the systems become, the more sophisticated the attacks become. The biggest risk lies in how we use these systems: if AI systems one day really do automate large parts of our everyday lives and make decisions for us, our dependence on them will be much greater, and potential attackers can do correspondingly greater damage. Another challenge is that common IT security mechanisms cannot simply be applied to AI systems. In addition, you don't want security measures to limit the performance of the models.

2

How can we protect AI systems and their data from attacks?

Ahmad-Reza Sadeghi: We have to find new ways to secure AI algorithms. In my research, I have also been heavily involved with applied cryptography, i.e. computing on encrypted data. Purely cryptographic solutions are not yet scalable, especially for huge AI models, which in some cases have billions of parameters. Algorithmic improvements as well as hardware-based solutions for AI security are therefore also being researched. Another interesting field is the use of AI itself for security, i.e. algorithms that protect systems against attacks.

In terms of data protection, distributed machine learning represents an important opportunity. Here, each end device accesses the current version of the model and trains it locally with its own data set. Personal data therefore does not have to be sent to a central server. Among other things, this increases privacy, for example in a medical context: hospitals do not share medical data with each other, but can still use distributed machine learning to train the same model on their data and thus collaborate. However, distributing data and AI models across more systems also creates more points of attack. Individual computers could be taken over, for example via malicious software or because people within an institution collaborate with the attackers. If that happens, the overall model can be manipulated.
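To make the idea concrete, the following is a minimal, hypothetical sketch of distributed (federated) model training in the spirit described above. It is not code from the interview or from any particular framework: a simple linear model is assumed, each simulated "hospital" trains it on its own private data, and only the model parameters, never the raw data, are averaged into the shared global model.

```python
# Minimal sketch of federated averaging (illustrative only, not a production protocol).
# Assumptions: a simple linear model trained by gradient descent; each client keeps its
# raw data local and shares only updated model parameters with the aggregator.

import numpy as np

def local_training(weights, X, y, lr=0.1, epochs=5):
    """Train the shared model on one client's private data; return the new weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, client_datasets):
    """One round: every client trains locally, the server averages the results."""
    local_updates = [local_training(global_weights, X, y) for X, y in client_datasets]
    return np.mean(local_updates, axis=0)        # only parameters leave the clients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three simulated "hospitals", each with a private dataset that is never shared.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(100, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=100)
        clients.append((X, y))

    w = np.zeros(2)                              # shared global model
    for _ in range(20):
        w = federated_round(w, clients)
    print("learned weights:", w)                 # approaches [2, -1] without pooling data
```

The sketch also illustrates the attack surface mentioned above: the server trusts every client's update, so a single compromised participant could submit manipulated parameters and skew the averaged global model.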

3

What needs to happen to ensure security in the AI age?

Ahmad-Reza Sadeghi: We need to define the notion of security in the AI context more broadly than before. AI decisions enjoy a high reputation and are often seen as neutral and unbiased. However, they often merely reflect the data used to train the AI systems - and thus human behavior, habits, and biases. This shows that more attention needs to be paid to social factors in the development of AI systems. The impact of AI systems on our society also needs to be studied in more detail. While AI applications for the financial market, in medicine, or in the legal field are obviously recognizable as critical applications that need to be extensively analyzed and reviewed, the consequences of other AI applications, such as the recommendation algorithms of Facebook, Twitter, and Google, can easily be overlooked. These create the echo chambers that are transforming our societies. I don't worry about terminators, but about the insidious impact of social media on democratic countries and their electoral systems. AI holds many opportunities for business and society, but we will only be able to realize its full potential if we develop and deploy the technology in a secure, privacy-protecting, and ethically responsible way.

The interview is released for editorial use (provided the source is acknowledged: © Plattform Lernende Systeme).
