More opportunities than threats: Artificial Intelligence in elections
We encounter artificial intelligence (AI) in all areas of life, including democratic elections. AI-controlled social bots on social media and deepfakes aim to influence voting decisions. On the other hand, AI systems can give people individual voting recommendations and help detect fake news. In a recent white paper, experts from Plattform Lernende Systeme examine which AI applications can be used in elections and where their potential and challenges lie. Their conclusion: AI systems hardly pose a threat to elections. On the contrary, the opportunities of using AI to strengthen open opinion formation in the run-up to elections outweigh the risks.
Fake news and disinformation campaigns make it difficult to form opinions in the run-up to elections. With the help of artificial intelligence, such content can be disseminated more efficiently and in a more targeted manner, especially on social media. The manipulation of elections by AI systems is therefore one of the fears repeatedly voiced in society, according to the white paper "AI systems and the individual electoral decision".
According to the authors, AI systems have so far posed no risks to the electoral process itself. "In particular, voting in Germany is very secure due to the absence of voting computers and similar technology. Isolated possible attacks on the evaluation of votes are known IT security issues and have nothing to do with AI. Where AI does pose risks is in election campaigns and in opinion formation before elections," says Tobias Matzner, Professor of Media, Algorithms and Society at Paderborn University and member of the working group "IT Security, Privacy, Legal and Ethical Framework" of Plattform Lernende Systeme.
Risks for opinion formation
One of the risks is the AI-driven dissemination of information. AI systems can run fake accounts on social media, liking or sharing content while passing off their manipulative activity as human behavior. In this way, these so-called social bots help false information or certain individuals gain wide reach. In microtargeting, AI methods are used to analyze user data and, similar to advertising, address different target groups with personalized messages. However, the extent to which voting decisions can actually be influenced in this way has not yet been established. Videos and images faked with AI can also influence the opinion formation of eligible voters. Once such deepfakes have been unmasked, however, AI systems are in turn capable of finding and deleting them.
In addition to deepfakes, other risks can also be countered with the help of AI tools. "Especially in the run-up to elections, the risk of deliberately deployed disinformation campaigns on the Internet to spread misleading information increases. Social bots are often used to amplify this. Nevertheless, AI systems in particular can make a valuable initial contribution to the detection of false news," says Jessica Heesen, head of the research area Media Ethics and Information Technology at the IZEW of the University of Tübingen and head of the "IT Security, Privacy, Legal and Ethical Framework" working group of Plattform Lernende Systeme. For one-sided content or in algorithmic filter bubbles, AI systems can also provide alternative links and counterarguments, contributing overall to a more diverse and trustworthy provision of information.
Head of the AI Governance project at the University of Duisburg-Essen.
More AI in the electoral process possible. "In the digital world, the political process now also produces a great deal of data. Increasingly, large amounts of data form the basis for automated evaluation processes, to which forms of machine learning can also be applied. So far, AI systems have hardly been used in the context of political elections - however, campaign and voting-advice apps as well as other forms of digital election preparation suggest that this could change in the future."
Using AI to improve voter information
Political processes today generate large amounts of data that can be analyzed with AI methods. The authors of the white paper emphasize the potential of AI-based data analysis for improving voter information and mobilization. This potential, they say, has not yet been exhausted. As an example, the white paper cites voting-advice apps such as the Wahl-O-Mat. Future versions of such apps could use AI methods to take greater account of people's individual attitudes and improve their recommendations with each use. Party campaign apps and election forecasts can also benefit from AI.
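To illustrate the kind of matching such voting-advice apps perform, the following sketch computes a simple agreement score between a user's answers to policy theses and the positions of (hypothetical) parties. This is an illustrative example, not the actual Wahl-O-Mat algorithm: the answer coding (-1 = disagree, 0 = neutral, 1 = agree) and the scoring rule are assumptions for demonstration.

```python
# Illustrative agreement scoring for a voting-advice app.
# Assumption: answers are coded -1 = disagree, 0 = neutral, 1 = agree.

def agreement(user: list[int], party: list[int]) -> float:
    """Return the percentage agreement between a user's and a party's answers."""
    assert len(user) == len(party)
    # Each thesis contributes 2 points for an exact match, 1 point for a
    # one-step difference (e.g. neutral vs. agree), 0 for full disagreement.
    points = sum(2 - abs(u - p) for u, p in zip(user, party))
    return 100 * points / (2 * len(user))

# Example: a user's answers to three theses, compared with two hypothetical parties
user = [1, -1, 0]
print(agreement(user, [1, -1, 1]))   # high agreement
print(agreement(user, [-1, 1, 0]))   # low agreement
```

An AI-based variant, as envisaged in the white paper, could go beyond such a fixed rule, for example by learning thesis weights from each user's stated priorities.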
Professor of Philosophy of Technology at the Karlsruhe Institute of Technology (KIT), Head of the Office of Technology Assessment at the German Bundestag (TAB)
AI is not an end in itself. "AI is often seen as a threat to democracy, primarily because of the feared manipulation of voter will. Conversely, AI enthusiasts call for AI systems to be used to improve democratic electoral processes simply because they are available. Contrary to such sweeping expectations and fears, however, it is necessary to ask specifically what AI can contribute, and under what conditions, to supporting democratic elections and remedying existing problems."
In order to realize the opportunities of AI systems for open opinion formation and to mitigate the risks, the authors outline design options. For example, they recommend further legal restrictions on microtargeting, such as mandatory labeling. For the consistent prosecution of criminal offenses on social media, the experts call for better staffing of law enforcement agencies and the judiciary. In addition, people's competencies for evaluating information on the Internet must be strengthened.
Professor of Public Law at the University of Kassel, Hessian Commissioner for Data Protection and Freedom of Information
Legal framework to protect fundamental rights. "The application of Artificial Intelligence in the context of elections can improve the conditions for the realization of fundamental rights, for example by supporting the freedom to form and express opinions. However, it can equally worsen them if, for example, it is used to manipulate the will of voters without detection. To ensure that such applications do not jeopardize the objective of informed and free elections, a legal framework is needed that promotes a socially beneficial use of Artificial Intelligence."
About the white paper
The white paper "AI systems and the individual electoral decision" was written by experts from the "IT Security, Privacy, Legal and Ethical Framework" working group of Plattform Lernende Systeme. An executive summary in English is available online.
Linda Treugut / Birgit Obermeier
Press and Public Relations
Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | 80333 Munich
T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39