AI and elections: Threat or Opportunity for Democracy?

Artificial Intelligence (AI) is increasingly changing the way we live, and AI systems are now also being used in democratic elections. In the run-up to the 2021 Bundestag elections, Plattform Lernende Systeme organised a virtual background discussion in cooperation with the Science Press Conference on 24 June to discuss the opportunities, but also the risks, of AI in relation to elections.

From left to right: Armin Grunwald (KIT), Christoph Bieber (University of Duisburg-Essen), Jessica Heesen (University of Tübingen) and Tobias Matzner (University of Paderborn).

AI can analyse the parties' election programmes for voters, assist in organising election campaigns under pandemic conditions and detect misinformation on social media platforms. On the other hand, voters' decisions can be influenced by AI systems, for example through deepfakes or chatbots. At the virtual background discussion "AI and elections: Threat or Opportunity for Democracy?", the experts of Plattform Lernende Systeme discussed, among other things, which concrete AI applications are used in elections and how they can contribute to strengthening democracy. The panel was moderated by science journalist Eva Wolfangel.

Analysing the opportunities and challenges of AI in elections with an open mind

As is so often the case, AI in the context of elections is sweepingly assessed either as a problem solver or as a threat, explained Armin Grunwald, Professor of Philosophy of Technology at the Karlsruhe Institute of Technology (KIT), Head of the Office of Technology Assessment at the German Bundestag (TAB) and member of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme. However, this simplistic debate between far-reaching hope on the one hand and fear of manipulation and loss of control on the other does not help. Instead, the focus should be on a sober and open-minded analysis of AI-based technical possibilities to support electoral processes.

Christoph Bieber, head of the AI Governance project at the University of Duisburg-Essen and member of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme, emphasised above all the opportunities of AI for democratic elections. For example, AI can be used in voter information tools to provide personalised recommendations on parties or candidates that match voters' political attitudes. In addition, parties are increasingly using "campaign apps" with which their members collect data on campaign events, which is then analysed using machine learning. Protecting this data is a new challenge for politics and society, Bieber said.
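Voter information tools of this kind typically compare a voter's stated positions with the parties' positions statement by statement. The following is a minimal sketch of such a matcher in Python; the party names, the encoded positions and the simple agreement score are illustrative assumptions, not the method of any actual election-information tool.

```python
# Minimal sketch of a voting-advice-style matcher (illustrative only).
# Party names, positions and the scoring rule are invented for this example.

from typing import Dict, List, Tuple

# Positions per statement: -1 (disagree), 0 (neutral), +1 (agree).
PARTY_POSITIONS: Dict[str, List[int]] = {
    "Party A": [1, -1, 0, 1],
    "Party B": [-1, 1, 1, 0],
    "Party C": [0, 1, -1, 1],
}

def agreement_score(voter: List[int], party: List[int]) -> float:
    """Share of statements on which voter and party take the same position."""
    matches = sum(1 for v, p in zip(voter, party) if v == p)
    return matches / len(voter)

def rank_parties(voter_answers: List[int]) -> List[Tuple[str, float]]:
    """Return parties sorted by agreement with the voter's answers."""
    scores = [
        (name, agreement_score(voter_answers, positions))
        for name, positions in PARTY_POSITIONS.items()
    ]
    return sorted(scores, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # A voter's answers to the same four statements.
    voter = [1, 1, 0, 1]
    for name, score in rank_parties(voter):
        print(f"{name}: {score:.0%} agreement")
```

Real tools weight statements and use richer answer scales, but the basic principle of matching voter answers against party positions is the same.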

Meanwhile, the risks associated with AI systems in elections have so far been relatively low, said Tobias Matzner, Professor of Media, Algorithms and Society at the University of Paderborn and member of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme. Future threats mainly concern the formation of opinions, Matzner said. Through the automated recommendation of videos or news items in social media feeds, manipulative content can repeatedly reach new audiences. With the help of AI, user behaviour can also be analysed and manipulative content spread in an even more targeted manner. Matzner saw a growing risk in AI-generated fake videos and images, so-called deepfakes.

Artificial intelligence as an antidote to AI threats

For some of these threats, Artificial Intelligence itself offers solutions, explained Jessica Heesen, media ethicist at the University of Tübingen and head of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme. For example, she said, AI systems could detect deepfakes by identifying the pre-existing content that was used to create the fake. In addition, AI could compensate for one-sided media use in so-called filter bubbles: the system recognises an individual's predominant usage pattern and recommends contrasting content in a targeted manner.
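The idea of counteracting one-sided media use can be illustrated in a few lines of code: estimate which perspective a user consumes most often and deliberately mix content from other perspectives into the feed. The labels, the share of contrasting items and the mixing rule below are illustrative assumptions, not a description of any real platform's recommender.

```python
# Minimal sketch of diversity-aware feed building (illustrative only).

from collections import Counter
from typing import Dict, List

def predominant_leaning(history: List[str]) -> str:
    """Return the perspective label the user has consumed most often."""
    return Counter(history).most_common(1)[0][0]

def diversify_feed(history: List[str],
                   candidates: List[Dict[str, str]],
                   contrast_share: float = 0.3) -> List[Dict[str, str]]:
    """Build a feed in which roughly `contrast_share` of the items come from
    perspectives other than the user's predominant one."""
    dominant = predominant_leaning(history)
    same = [c for c in candidates if c["leaning"] == dominant]
    other = [c for c in candidates if c["leaning"] != dominant]
    n_contrast = max(1, int(len(candidates) * contrast_share))
    return other[:n_contrast] + same[: len(candidates) - n_contrast]

if __name__ == "__main__":
    history = ["A", "A", "B", "A", "A"]          # mostly perspective "A"
    candidates = [
        {"title": "Item 1", "leaning": "A"},
        {"title": "Item 2", "leaning": "B"},
        {"title": "Item 3", "leaning": "A"},
        {"title": "Item 4", "leaning": "C"},
    ]
    for item in diversify_feed(history, candidates):
        print(item["title"], "-", item["leaning"])
```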

In September, the white paper "AI and Elections" by Plattform Lernende Systeme will be published, in which, among others, the four experts from the background discussion are involved.

Further information:

Linda Treugut / Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
