3 Questions for Jürgen Beyerer

Professor of Computer Science at the Karlsruhe Institute of Technology (KIT) and Director of Fraunhofer IOSB / Member of Plattform Lernende Systeme


AI in hazardous environments: Only as much human intervention as necessary

Self-learning robots and AI systems can provide support in space, in the deep sea or in the event of disasters, for example by exploring unknown terrain or carrying out work in dangerous regions. To do this, the technical systems must act as autonomously as possible, but at the same time always remain under the ultimate control of humans. In this interview, Jürgen Beyerer explains how these requirements can be met and where there is still a need for research. He is a professor of computer science at the Karlsruhe Institute of Technology (KIT) and director of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB. At Plattform Lernende Systeme, he heads the "Hostile-to-life Environments" working group.

1

Mr Beyerer, what distinguishes the use of self-learning AI-based systems in hostile environments from other fields of application?

Jürgen Beyerer: Hostile environments are characterized by conditions that particularly stress or endanger humans and animals and do not correspond to their natural habitats. Such environments are usually hostile not only to humans but also to technology; this is the first difference from other fields of application of learning systems. To operate successfully under such extreme or challenging conditions, AI-based systems need appropriately adapted and reliable hardware. Incidentally, this also entails increased dangers if such systems are used for purposes other than those intended or against humans; this, too, can be mentioned as a difference.

AI systems for hostile environments are particularly concerned with replacing or assisting humans in order to minimize the risk to them. For other AI systems, minimizing hazards is not always the primary concern. Another difference is that the conditions for learning in hostile environments are usually very difficult, because the situations encountered in many operations are not well known and are highly dynamic; rescue or recovery operations are one example. "Classical" machine learning, however, is based on the analysis of large amounts of similar data, which cannot be collected from such missions. Therefore, new learning algorithms have to be developed that can learn from little data.

In remote operations, where learning systems have to operate autonomously for months or even years (e.g., in underwater or space research), another difference becomes clear: it is not possible to predict in which direction and to what extent the experience a self-learning system gains during continuous operation will change its characteristics. "Classical" learning algorithms work well with data that have been pre-processed and prepared by humans; with such data, learning is essentially controllable. In the case of continuous autonomy, however, such data are not available, and technologies for the automatic preparation of complex learning data are still being researched.

Another difference concerns maintenance and repair: humans can rarely take on such work in hostile environments, so remote systems must be able to handle functional and subsystem failures.

2

How autonomously should and may AI technologies act in hostile environments?

Jürgen Beyerer: That depends on the environment and on the mission (or the operational goals) of the acting technical system. Technologies cannot act, by the way; the systems that use the corresponding technologies do. Degrees of autonomy have been discussed in research on autonomous systems since at least the 1970s, and different models and classifications exist. Between direct control by humans and complete autonomy, several intermediate levels can be distinguished in which the human cedes more or less control to the learning system.

There are also hostile environments in which only technical systems acting autonomously can operate, because a human cannot stay there and communication is either impossible (e.g. at great depths under water) or so delayed that timely human reactions to changes in the situation at the system's place of operation are impossible (e.g. extraterrestrial planetary exploration). In hostile environments where human responses or instructions are possible "in real time," the goal is usually to replace or assist humans in order to minimize the risk to them. Therefore, the rule is: as much autonomy as possible, and only as much human intervention as necessary.

In the course of each deployment, continuous competence analysis of an AI-based system is necessary to determine which levels of autonomy are safe and possible in the given situation. The competencies of a learning system, i.e. its ability to solve the problems it encounters, may also change during a deployment, and quantitatively detecting these changes in particular is a problem that still needs to be researched. Lacking or insufficient competence always means increased human intervention, up to and including teleoperation (remote control).
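To make this idea concrete, here is a minimal sketch in Python of how a continuously updated competence estimate could be mapped to an autonomy level; the level names and thresholds are illustrative assumptions for this example, not the working group's model.

```python
# Minimal sketch: map an estimated competence value to an autonomy level.
# Level names and thresholds are illustrative assumptions.
from enum import Enum

class AutonomyLevel(Enum):
    TELEOPERATION = 0        # human controls the system directly
    HUMAN_CONFIRMS = 1       # system proposes actions, human confirms them
    SUPERVISED_AUTONOMY = 2  # system acts, human monitors and can intervene
    FULL_AUTONOMY = 3        # system acts without human intervention

def select_autonomy_level(competence: float) -> AutonomyLevel:
    """competence: estimated probability (0..1) that the system can handle
    the current situation on its own."""
    if competence >= 0.95:
        return AutonomyLevel.FULL_AUTONOMY
    if competence >= 0.80:
        return AutonomyLevel.SUPERVISED_AUTONOMY
    if competence >= 0.50:
        return AutonomyLevel.HUMAN_CONFIRMS
    return AutonomyLevel.TELEOPERATION  # insufficient competence: remote control

print(select_autonomy_level(0.90))  # AutonomyLevel.SUPERVISED_AUTONOMY
print(select_autonomy_level(0.30))  # AutonomyLevel.TELEOPERATION
```

In a real system, the competence estimate itself would come from the open research question mentioned above: quantitatively detecting how the system's abilities change during the mission.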

In conclusion, it has to be said that humans will remain irreplaceable as emergency responders and decision-makers in the future, especially in missions to save human lives. We discussed this in detail in a recent white paper from Plattform Lernende Systeme.

3

What are the resulting premises for technology development?

Jürgen Beyerer: In hostile environments, autonomously acting learning systems may encounter ethically or legally problematic situations during a mission in which they are not allowed to decide on their own and depend on human support (keyword: dilemma situations). Such systems therefore also need components that can recognize and communicate such situations in order to ensure the appropriate level of autonomy. These components still have to be developed.

New technologies have to be developed for autonomous learning in systems that operate autonomously over long periods, as well as for learning from sparse data in one-off situations (keyword: incremental learning). Transferring what has been learned and generalizing it (inductive learning) to non-identical systems is also an important research topic. This requires comprehensive data and information collections as well as simulation and test environments in which learning systems can learn, if necessary together with humans (keyword: immersive learning environments).
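As a rough illustration of what incremental learning means in practice, the following sketch (assuming Python with scikit-learn; the sensor features and labels are made up for the example) updates a classifier batch by batch as new observations arrive during a mission, instead of training once on a large, pre-collected data set.

```python
# Sketch of incremental learning: the model is updated with each small batch
# of newly labelled observations instead of being trained once on a large corpus.
# Features, labels and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])   # e.g. "terrain passable" vs. "terrain not passable"
model = SGDClassifier()

def on_new_observations(X_batch, y_batch):
    """Incorporate a small batch of freshly labelled observations into the model."""
    # partial_fit updates the model in place; classes must be passed on the first call
    model.partial_fit(X_batch, y_batch, classes=classes)

# Simulated mission: a handful of small batches instead of one large data set
for _ in range(5):
    X = rng.normal(size=(8, 4))              # 8 observations, 4 sensor features each
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for human or rule-based labels
    on_new_observations(X, y)

print(model.predict(rng.normal(size=(3, 4))))
```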

In addition, AI-based systems should continuously analyze their own competencies and capabilities in order to identify situations in which they need human assistance, as well as the type of assistance needed, from confirming a decision of an autonomous system to full teleoperation. Even an experienced operator supervising the work of an autonomous system is not always able to recognize such (usually very complex) situations and intervene in time. In the case of learning systems in particular, it is difficult for the operator to assess how the system's competencies have changed as a result of learning. That is why we want (and need!) to equip AI with such capabilities itself. Enabling learning systems to perform a comprehensive, dynamic self-competence analysis would be a central research task.
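One simple way to approximate such a self-assessment, sketched below under my own assumptions (ensemble disagreement as the uncertainty measure and an arbitrary threshold), is to let several models judge the same situation and request human assistance when they disagree too strongly.

```python
# Sketch: estimate uncertainty from the disagreement of an ensemble and
# request human assistance when it is too high. The threshold is an assumption.
import numpy as np

def assistance_needed(ensemble_probs: np.ndarray, threshold: float = 0.2) -> bool:
    """ensemble_probs: shape (n_models, n_classes), class probabilities that each
    ensemble member assigns to the current situation."""
    mean = ensemble_probs.mean(axis=0)
    # Average spread of the members around the mean prediction
    disagreement = np.abs(ensemble_probs - mean).mean()
    return disagreement > threshold

# Members largely agree -> the system may continue autonomously
print(assistance_needed(np.array([[0.9, 0.1], [0.85, 0.15], [0.8, 0.2]])))  # False
# Members disagree -> confirmation by a human operator should be requested
print(assistance_needed(np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])))    # True
```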

Clear legal requirements as well as uniform standards describing the framework conditions for the use of learning systems are crucial prerequisites for the widespread use of such systems - and not only in hostile environments. In any case, humans must always be able to take control of an autonomous system if they deem it necessary.

The white paper "Competent in Use - Variable Autonomy of Learning Systems in Hostile Environments" by Plattform Lernende Systeme is available for download here.
