3 Questions for

Holger Hanselka

Steering Committee Member of Plattform Lernende Systeme and President of the Karlsruhe Institute of Technology (KIT)

Holger Hanselka ©Andrea Fabry

Safer Internet Day: AI - Friend or foe?

Phishing e-mails that elicit passwords, fraudulent online requests to disclose bank details, or fake news: Safer Internet Day raises awareness of threats to networks and IT systems. The hacker attack on the Bundestag at the end of last year and the "WannaCry" cyber attack that hit Deutsche Bahn in May 2017 are just two examples of attacks on the IT systems of government and business. According to a Bitkom study, industrial espionage, sabotage and data theft caused 55 billion euros of damage in Germany in 2017. As companies and public institutions become increasingly interconnected in the course of digitalization, their potential vulnerability to cyber attacks grows. At the same time, rapid advances in Artificial Intelligence (AI) and Machine Learning are creating a new dynamic in IT security. Holger Hanselka, Steering Committee Member of Plattform Lernende Systeme and President of the Karlsruhe Institute of Technology (KIT), explains in this interview how AI can improve IT security and where it opens up entry points for new threats.


How can Artificial Intelligence make Internet applications more secure?

Holger Hanselka: Generally speaking, protecting only the external perimeter of a complex IT system is not enough. We must also be able to react if part of the IT system has already been taken over by an attacker. For this we need reliable attack detection, and this is where AI systems can show their great potential and substantially increase security. IT systems can also be hardened against attackers in advance: by letting AI systems attack the system, weak points can be discovered before it goes into operation. But we must be clear that AI, like many technologies, has a dual-use character. On the one hand, applying AI can "harden" IT systems; on the other hand, attacks carried out with AI could trigger a new race between attackers and defenders.
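To make the idea of AI-based attack detection inside a system more concrete, here is a minimal, purely illustrative sketch (not from the interview): a statistical anomaly detector that learns a baseline from normal traffic and flags observations that deviate strongly from it. The class name, threshold, and sample values are all hypothetical; real attack-detection systems learn far richer models.

```python
from statistics import mean, stdev

class AnomalyDetector:
    """Hypothetical sketch: flag observations that deviate strongly
    from a baseline learned on normal behaviour."""

    def __init__(self, threshold: float = 3.0):
        # Deviations beyond threshold * standard deviation are flagged.
        self.threshold = threshold
        self.baseline: list[float] = []

    def train(self, normal_traffic: list[float]) -> None:
        # Learn the baseline from observations of normal operation.
        self.baseline = list(normal_traffic)

    def is_anomalous(self, value: float) -> bool:
        mu = mean(self.baseline)
        sigma = stdev(self.baseline)
        return abs(value - mu) > self.threshold * sigma

# Illustrative values: hourly login counts during normal operation.
detector = AnomalyDetector()
detector.train([100, 104, 98, 101, 99, 103, 97, 102])
typical = detector.is_anomalous(101)   # False: within normal range
spike = detector.is_anomalous(450)     # True: suspicious spike
```

The same idea, with learned models instead of simple statistics, is what lets such systems notice an attacker who is already inside the perimeter, because compromised components start behaving unlike their baseline.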


So do new dangers threaten if cyber criminals also use Artificial Intelligence?

Holger Hanselka: Exactly. The "enemies" will of course also use AI, and there will be two effects. AI systems can uncover completely new vulnerabilities and detect attacks. At the same time, elaborate attacks that so far only human experts can carry out will be automated in the future and will therefore occur in far greater numbers. I am thinking here above all of social engineering, which is about using clever deception to induce people to reveal their bank details, for example. AI systems can automate customized phishing e-mails, and in real phone calls they can impersonate people to whom you supposedly urgently need to give a password. In a broader sense, social engineering will also include targeted manipulation through half-truths or fake news. It is particularly worrying that AI systems can automatically falsify video and audio files, while people tend to find what they see or hear highly credible.


What challenges need to be overcome in order to exploit the full potential of Artificial Intelligence for IT security?

Holger Hanselka: When it comes to IT security, we want to provide reliable guarantees. We therefore cannot simply test IT security by trial and error, because the intentions and plans of an intelligent attacker cannot be anticipated. One of the problems with using AI is that we do not yet understand why an AI makes one decision rather than another. Here we urgently need further research and progress before AI systems can be relied upon for critical decisions. One approach could be a combination of classical algorithms and AI, in which the algorithms check the proposals of the AI. Another possibility would be AI systems that not only output decisions, but also give the reasons for them.
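The combination described above, classical algorithms checking the proposals of an AI, can be sketched roughly as follows. This is an illustrative assumption, not a scheme from the interview: an opaque model proposes a response to a security alert, and a deterministic rule layer vetoes anything outside a vetted set of actions, escalating to a human instead.

```python
def ai_propose_action(alert_score: float) -> str:
    """Stand-in for an opaque AI model's proposal (hypothetical logic)."""
    return "shutdown_server" if alert_score > 0.8 else "log_only"

# Classical rule layer: only these actions are vetted as safe to automate.
ALLOWED_ACTIONS = {"log_only", "block_ip", "quarantine_file"}

def checked_decision(alert_score: float) -> str:
    """Accept the AI's proposal only if it passes the deterministic check;
    otherwise hand the decision to a human."""
    proposal = ai_propose_action(alert_score)
    if proposal not in ALLOWED_ACTIONS:
        return "escalate_to_human"
    return proposal

low_risk = checked_decision(0.3)    # "log_only": proposal passes the check
high_risk = checked_decision(0.9)   # "escalate_to_human": proposal vetoed
```

The point of the design is that the guarantee comes from the simple, auditable rule layer, not from the opaque model, which matches the demand for reliable guarantees in critical decisions.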
