IT Security


Resilience becomes a major requirement

Self-learning systems have the potential to make many processes more efficient and more convenient and secure for people, for example in road traffic or working life – on the condition that they are fully reliable and secured against attack.

The increasing use and development of self-learning systems and methods of Artificial Intelligence must be considered in the context of increasing digital connectivity. As interfaces multiply, so does the number of points vulnerable to cyberattack. As they become more and more widespread, connected self-learning systems – much like classical software applications – are potential targets for such attacks.

Security is therefore a key issue. Self-learning systems must be robust and resilient to disruptions, targeted attacks and unexpected incidents, and resilience is evolving into a key design requirement. Resilience refers to a system's ability to continue performing essential services or tasks even when individual components fail or a massive disruption strikes from the outside. This is especially important in security-sensitive fields of application such as energy supply. Self-learning systems can also be used for IT security and privacy protection purposes; deep learning methods, for example, help to identify vulnerabilities in software systems. However, this capability has dual-use potential: self-learning systems themselves might one day be used to launch cyberattacks.

The exact applications in which self-learning systems will become standard depend in large part on the reliability and operational safety of the systems. This is true in particular for systems that interact directly with humans, for example assistance robots in nursing care or driver assistance systems in the field of mobility. The decisive criterion for the acceptance of self-learning systems is that they must not exhibit undesirable behaviour or cause harm when an event occurs during operation that was not anticipated in the development stage. It must also be ensured that humans not only set the goals for the systems, but also remain in safe control of them or are able to regain that control.

Working Group 3 of the Plattform Lernende Systeme, headed by Jörn Müller-Quade (KIT Karlsruhe) and Eric Hilgendorf (University of Würzburg), focuses on these issues.