3 Questions for

Volker Tresp

Professor of Machine Learning at Ludwig-Maximilians-Universität Munich and member of the Technological Enablers and Data Science working group of Plattform Lernende Systeme


Simply explained: What is Artificial Intelligence and what can it do?

Artificial Intelligence (AI) is considered a key digital technology of the future. AI-based computer systems can control traffic, optimize production processes and support doctors in diagnosis. We have long been using AI systems in our everyday lives via smartphones. But what exactly is AI and how does it work? What distinguishes weak AI from strong AI? And what challenges are associated with the use of self-learning computer programs? Volker Tresp answers these questions in an interview. He is a professor at Ludwig-Maximilians-Universität Munich with a research focus on machine learning in information networks and a member of Plattform Lernende Systeme.



What exactly is AI - and how does it learn?

Volker Tresp: There is no generally accepted definition of Artificial Intelligence. We generally speak of AI when a computer system can take on a task that would require intelligence from a human - and for which not every step had to be programmed in advance. One example is sorting mail: In the past, a human would read the handwritten postal code on a letter and sort the mail into different compartments depending on the destination address. An AI system uses a camera to recognize not only the postal code but also the city name, the addressee and the exact address at the same time; by checking this information for consistency, it works much faster and makes fewer mistakes. Another example is a doctor's decision on how to treat a patient. In addition to years of training, this requires a lot of experience. In the future, doctors will increasingly be supported in their decision-making by AI - for example, in the analysis of medical X-ray or ultrasound images. AI systems can detect abnormalities more quickly and do not tire: they have no problem searching through hundreds of CT scans for tiny lung nodules.

Like humans, AI systems learn from past examples in order to perform better in the future. What is needed is a training process. One method is supervised learning. Take the example of mail sorting: for training, the AI system is fed example pairs, each consisting of the image of a handwritten digit and the corresponding typed digit. Over time, the computer learns to predict the correct digit for a new image. Another learning method is reinforcement learning. Here, the computer is rewarded with positive feedback when it solves a task correctly. Take chess, where computers are now superior to any human champion. The learning principle is similar to training a pet: we cannot really show a dog how to retrieve a stick; it has to learn that on its own. The only signal humans can give is success-related recognition, such as a treat. Likewise, a baby does not say what it dislikes: it cries. The parents have to figure out for themselves what to do. That, too, is a form of reinforcement learning!
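The idea of supervised learning from example pairs can be sketched in a few lines of code. The following is a deliberately minimal illustration (not the actual mail-sorting system described in the interview): a nearest-neighbour classifier that learns to map tiny 3x3 "digit images" to labels from labelled examples, then labels a new, slightly noisy image by finding the most similar training example.

```python
# Minimal supervised-learning sketch (illustrative only): a nearest-neighbour
# classifier trained on (image, label) example pairs, as in the mail-sorting
# example from the interview. Images are flattened 3x3 binary pixel grids.

# Training data: pairs of (handwritten-style image, correct digit label).
TRAINING_PAIRS = [
    # a crude "0": a ring of pixels
    ([1, 1, 1,
      1, 0, 1,
      1, 1, 1], 0),
    # a crude "1": a vertical bar
    ([0, 1, 0,
      0, 1, 0,
      0, 1, 0], 1),
]

def distance(a, b):
    """Count the pixels on which two images differ (Hamming distance)."""
    return sum(x != y for x, y in zip(a, b))

def predict(image):
    """Return the label of the most similar training example."""
    _, best_label = min(
        TRAINING_PAIRS, key=lambda pair: distance(image, pair[0])
    )
    return best_label

# A new, slightly noisy "1" (one corner pixel flipped):
noisy_one = [1, 1, 0,
             0, 1, 0,
             0, 1, 0]
print(predict(noisy_one))  # → 1
```

Real systems use far richer models and many thousands of training pairs, but the principle is the same: the labelled examples, not hand-written rules, determine how new inputs are classified.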


There is often talk of strong and weak AI. What does that mean?

Volker Tresp: No one would claim that a machine that can only capture the address on a letter possesses real intelligence. This is what is known as weak AI: the computer outperforms humans only in one specific task. Strong AI, on the other hand, has something like human intelligence. One way of testing this is the so-called Turing test - named after Alan Turing, a pioneer of computer science. In this test, a human poses questions to a computer system. If the human cannot tell from the answers whether they have been communicating with another human or with an AI, this would be a sign of strong AI. Now the question arises: Do voice assistants like Siri or Alexa already count as strong AI because - fed with content from the Internet - they can answer almost any question correctly? Or is this pure parroting? For all the progress made in AI, especially over the past ten years, most experts believe that we are still a long way from strong AI.


What are the opportunities and threats associated with AI?

Volker Tresp: The spectrum of AI applications is growing every day. It ranges from translation programs and recommendation systems for online shopping to medical image analysis and, in the future, autonomous driving. Driverless subways are already in operation in some cities. The majority of today's AI applications pose no threat: if a letter is delivered to the wrong address or the wrong product is recommended to me, that is not dangerous. If an AI-controlled vehicle overlooks a child in traffic, however, the consequences are infinitely more dramatic - this is where AI developers are challenged. Another question concerns the fairness of AI-based recommendations: as these become more and more a part of our everyday lives, they must not disadvantage any segment of the population; the goal must be trustworthy AI. In general, AI systems are data-hungry and need to be trained with large amounts of data, including personal data (anonymized where possible). This data must not fall into the wrong hands - effective data protection is needed here. We should also ask ourselves what AI is supposed to achieve in our working world. It can take monotonous and dangerous work off our hands, but it must not be used to patronize or manipulate people. It is important that we as a society discuss how and for what purpose we want to use AI.


More information on the fundamentals, applications and challenges of AI can be found on the Plattform Lernende Systeme website: www.ki-konkret.de (in German)

The interview is released for editorial use (with attribution of the source: © Plattform Lernende Systeme).
