Hostile-to-Life Environments


Helpful support in dangerous situations

The deep sea, outer space, contaminated environments, crisis zones: self-learning systems can take on tasks in places that are dangerous, place unreasonable hardship on humans or are harmful to their health. Such assistance systems and robots operate with varying degrees of automation and autonomy, depending on where they are deployed and what task they must accomplish.

Robots and unmanned systems are already being used to reduce risks for humans, whether exploring hostile-to-life environments such as the deep sea, measuring toxic gases and radiation in contaminated areas, or supporting search and rescue operations. For now, the technology and the decision-making processes remain largely under human control. On unmanned space missions, however, (semi-)autonomous robots are already taking on complex tasks.

Today, self-learning systems still depend heavily on humans to instruct them which tasks to execute. In the future they will be able to carry out sensitive tasks together with humans or navigate complex, unknown environments autonomously. Research and development on AI applications in hostile-to-life environments sets different priorities, not least because of the diversity of deployment sites. Whereas strategic control in hybrid systems remains in human hands, autonomous robots will be able to explore unknown terrain on their own and make decisions based on the knowledge they acquire, thanks to Artificial Intelligence (e.g. sensor technology and environment perception) and improved hardware (e.g. batteries and materials).
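One simple way to picture such autonomous exploration is a frontier-based strategy: the robot builds an occupancy map of its surroundings from sensor data and repeatedly heads for the boundary between mapped and unmapped space. The Python sketch below is purely illustrative and not drawn from any specific system; the grid encoding, function name and example map are assumptions made for the example.

```python
# Minimal frontier-based exploration sketch (illustrative only, not tied to any
# particular robot platform). The robot keeps an occupancy grid built from its
# sensors: -1 = unknown, 0 = free, 1 = obstacle. A "frontier" cell is a free
# cell adjacent to unknown space; the robot repeatedly drives to the nearest
# frontier, so the map grows without human intervention.
from collections import deque

UNKNOWN, FREE, OBSTACLE = -1, 0, 1

def nearest_frontier(grid, start):
    """Breadth-first search from the robot's cell to the closest frontier cell."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols]
        # A free cell bordering unknown space is a frontier: go there next.
        if grid[r][c] == FREE and any(grid[nr][nc] == UNKNOWN for nr, nc in neighbours):
            return (r, c)
        for nxt in neighbours:
            if nxt not in seen and grid[nxt[0]][nxt[1]] == FREE:
                seen.add(nxt)
                queue.append(nxt)
    return None  # no reachable frontier left: the accessible area is fully explored

# Example: a small, partly mapped grid with the robot at (0, 0).
grid = [
    [FREE, FREE, UNKNOWN],
    [FREE, OBSTACLE, UNKNOWN],
    [FREE, FREE, FREE],
]
print(nearest_frontier(grid, (0, 0)))  # -> (0, 1), the closest cell next to unknown space
```

In a real robot the occupancy grid would be built from noisy sensor data (e.g. sonar or lidar) and proper path planning would replace the direct breadth-first search, but the decision loop stays the same: sense, update the map, pick the next frontier.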

The range of possible applications for self-learning systems in hostile-to-life environments still needs to be explored, for example regarding long-term autonomy, autonomy in unstructured environments, and the development of heterogeneous autonomous systems and hybrid teams. This will open up opportunities and new business models, but also raise legal and ethical challenges, in part because of the dual-use potential of these applications.

Working Group 7, headed by Jürgen Beyerer (Fraunhofer IOSB, KIT) and Frank Kirchner (University of Bremen, German Research Center for Artificial Intelligence, DFKI), addresses these issues within Plattform Lernende Systeme.