3 Questions for

Thomas Schauf

Senior Expert Public and Regulatory Affairs at Deutsche Telekom AG and member of the working group IT Security, Privacy, Legal and Ethical Framework of the Plattform Lernende Systeme

Focus on the patient: Safe AI systems in medicine

Intelligent assistance systems in health care can support doctors in prevention, diagnosis and therapy decisions. An essential prerequisite is that the security of the AI systems is guaranteed. Thomas Schauf, Senior Expert Public and Regulatory Affairs at Deutsche Telekom AG, together with other members of the Plattform Lernende Systeme, has examined data management and IT security in the use of AI in medicine in a recent white paper. In this interview he explains how patient data can be protected and how access to sensitive information can be regulated.


The development of AI-based medical systems requires patient data. How can this data be made available in a secure way?

Thomas Schauf: In the Plattform Lernende Systeme, we discussed intensively how the necessary data can be made available on the one hand and processed securely and safely on the other. Data protection, transparency and the traceability of the AI system's decisions play a major role here. Above all, however, patients' trust in the AI system will be crucial if they are to make their data available in the first place. With the electronic patient file (ePA), for example, it is important that people can decide for themselves which of their data should be stored and which data they would rather not make available to certain doctors. Trust is, of course, also based on the IT security of the systems. In our white paper, we discuss the various technical options for secure database systems in this specific context.

Further research is needed on how such secure AI systems should be structured. The central question is: who would operate such an AI assistance system? We do not see the health insurance companies that issue electronic health cards in this role, as they also have a vested economic interest, which could undermine the trust dimension. We believe a trustworthy third party is needed here as an intermediary. What is needed is a secure system in which the different stakeholders, such as research institutions and research-based pharmaceutical companies, can participate without putting self-interest above patient welfare. This will certainly also be an issue that we will have to discuss with the Federal Ministry of Health.


In the application scenario "With Artificial Intelligence against cancer", which you analyze in your white paper, data from various sources are combined. How can this step be taken safely?

Thomas Schauf: We propose pooling the data virtually. That way, the data does not have to be collected in one physical central location; instead, a kind of patient clone or digital twin is built, which is then available as a training data set. We also asked ourselves whether the data must always come to the algorithm for it to learn, or whether the algorithm can come to the data. In the latter case, the algorithm moves from patient to patient and collects the data properties it needs for training.
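The idea of sending the algorithm to the data, rather than pooling the data centrally, is commonly known as federated learning. The following is a minimal Python sketch of that principle only; the one-parameter toy model, the site data and all names are illustrative assumptions, not taken from the white paper. Each site trains on its local records, and only the updated model parameter travels back to a coordinator.

```python
# Minimal sketch of "the algorithm comes to the data" (federated
# averaging). Illustrative assumption: a one-parameter linear model
# y = w * x; real medical models would be far more complex.

def local_update(weights, records, lr=0.01, epochs=20):
    """Train the model on one site's data; the records never leave the site."""
    w = weights
    for _ in range(epochs):
        # gradient of mean squared error over this site's local records
        grad = sum(2 * (w * x - y) * x for x, y in records) / len(records)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """One round: send the current model to every site, average the results."""
    updates = [local_update(global_w, records) for records in sites]
    return sum(updates) / len(updates)

# Toy data: three "hospitals", each holding (measurement, outcome) pairs
# that all follow y = 2 * x, so the learned weight should approach 2.
sites = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(0.5, 1.0), (1.5, 3.0)],
]

w = 0.0
for _ in range(10):
    w = federated_round(w, sites)
```

The design point this illustrates is the one made in the interview: the coordinator only ever sees model parameters, never the patient records themselves.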


What data is required for AI-assisted cancer treatment? Who should have access to it?

Thomas Schauf: From a data technology perspective, I would say: the more data, the better. We look for patterns in data in specific contexts. Consequently, the more data I have on people's behaviour, the better I can recognise patterns, make predictions for precautionary measures and derive probabilities from them. Doctors would then not only be able to derive treatment decisions, but perhaps also address the question of prevention. Of course, this presupposes that patients want a high degree of data transparency, which is not necessarily the case. In the Plattform Lernende Systeme we have therefore initially focused on the narrower treatment context. In other words, we want to define the framework for a self-learning system in which the diagnostic data available at the family doctor, the specialist or in hospitals can be made available to all attending physicians equally. And then, of course, there is the question of what a role and rights model for managing access rights can look like.

The patient, as sovereign, must always be the final decision maker. Take the role of the family doctor, for example: the patient probably has a relationship of trust with the family doctor, who will therefore play a central role in advising the patient on how to handle his or her data. In the future, the family doctor will not only provide medical advice but increasingly also advise on technological aspects, which requires new skills that doctors will have to acquire. Like the family doctor, the specialist may also feed in data. The technical operator of such an assistance system, however, is only allowed to intervene if the doctors feeding in data point to incorrect or distorted data in the system. The operator may not change the data without the doctors' request, because we would then run the risk that other interests could be pursued. This is where the role of the intermediary as a confidence-building authority, mentioned above, comes into play. The human being is at the centre, and the assistance system must be designed according to this principle.
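A role and rights model of the kind described above could, at its simplest, be expressed as a permission table. The sketch below is a hypothetical illustration; the role names, actions and the `doctor_request` flag are assumptions for this example, not a specification from the white paper.

```python
# Hypothetical role and rights model: the patient remains sovereign,
# doctors may feed in data, and the operator may only correct data
# when a feeding-in doctor has flagged it.

PERMISSIONS = {
    "patient":       {"read", "grant_access", "revoke_access"},
    "family_doctor": {"read", "feed_in", "advise"},
    "specialist":    {"read", "feed_in"},
    "operator":      set(),  # no standing rights of its own
}

def allowed(role, action, doctor_request=False):
    """Return True if `role` may perform `action`.

    The special case encodes the rule from the interview: the operator
    may correct data only on a doctor's explicit request.
    """
    if role == "operator" and action == "correct_data":
        return doctor_request
    return action in PERMISSIONS.get(role, set())
```

For example, `allowed("operator", "correct_data")` is denied, while `allowed("operator", "correct_data", doctor_request=True)` is granted; access-granting actions remain reserved to the patient.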

Download the white paper "Secure AI systems for medicine - data management and IT security in the cancer treatment of the future" by the Plattform Lernende Systeme.
