3 Questions for Jessica Heesen

Head of the research focus Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities at the University of Tübingen and head of the working group "IT Security, Privacy, Legal and Ethical Framework" of Plattform Lernende Systeme

Focusing on the common good: How we shape AI responsibly

Artificial Intelligence is a powerful tool that can be used to improve processes and develop new products and services. Much is technically possible. But to what end do we as a society want to use AI? A broad socio-political debate is needed to create the framework conditions within which technical applications can be developed and deployed successfully. In this interview, Jessica Heesen explains how that can succeed. The philosopher and media ethicist heads the research focus on Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities at the University of Tübingen as well as the working group "IT Security, Privacy, Legal and Ethical Framework" of Plattform Lernende Systeme.

1

Ms Heesen, how can Artificial Intelligence be used purposefully for the benefit of people?

Jessica Heesen: That is not so easy to say, as long as we do not agree on what is actually good for human beings. Take the widespread concept of homo oeconomicus, which assumes that people orient their actions towards personal and financial gain. If AI were used according to this conception of humanity, exactly what we can observe today would happen: companies use AI to maximise profits, among other things by encouraging overconsumption. In online retail, for example, they succeed in this through personalised advertising and automated recommendations.
However, I do not believe that humans are fundamentally interested only in profit. Rather, they are social beings who want to live in a community and strive for certain values, such as social recognition. If we want to use AI in line with this conception of humanity, social and ecological issues come to the fore. The individual's incentive to build AI systems for socio-ecological sustainability could then be social recognition rather than profit. So what we need first is an incentive system that shapes how AI is used. The state has numerous instruments at its disposal for this, for example promoting voluntary work or so-called civic technologies through competitive tenders.

2

How can we guide society as a whole into a future with AI?

Jessica Heesen: First of all, it has to be said that AI systems are already in use in a wide variety of areas, so this is by no means a question only for the future. Nevertheless, many people have reservations about AI. It is often said that one simply has to trust the technology, as if citizens were obliged to do so. But the truth is that well-founded trust can only emerge if mistrust is accepted first, because only then can users form their own picture of the opportunities and risks of an application. So it is all about transparency. Transparency, however, must not be understood as an end in itself: it builds confidence only if individuals also have a real choice to decide against using the application.
Another measure is the certification of AI. For users, data protection is an important issue, but certificates are also a good steering instrument for ecological sustainability, for example through voluntary commitments by industry. However, it is very difficult to agree on common criteria for certifying AI, especially since there are repeated warnings that overly strict regulation slows down the economy and innovation.

3

It seems that the interests of the economy and the common good do not always coincide. Do we need more intervention in the free market economy when it comes to AI?

Jessica Heesen: When it comes to AI, we do not really have a free market economy, because the market is dominated by strong monopolies; Google is just one example. And yes, intervention can be part of the solution here. We should not leave it to corporations to decide what sustainable and equitable digitalisation should look like. Instead, we should strive for a plural economy in which individual public-interest-oriented developers can also hold their own. That is not the case at the moment.
Of course, we also need to talk about what happens to the profits generated by AI. The EU's High-Level Expert Group on Artificial Intelligence, for example, advocates in its ethics guidelines that these profits be distributed fairly across society. It should be emphasised that AI does not benefit only companies and wealthy individuals but can also strengthen social welfare and create value for everyone. Then public acceptance of AI will grow as well.

A detailed version of this interview, along with other interviews, can be found in the volume AI and Sustainability (in German), written by Christiane Schulzki-Haddouti for Plattform Lernende Systeme.

This interview is released for editorial use, provided the source is credited (© Plattform Lernende Systeme).