How is AI changing society?
Prof. Dr. Jessica Heesen, Head of the Media Ethics, Philosophy of Technology and AI Research Group at the International Centre for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen.
Thanks to generative AI, it is easier than ever to create texts, images or videos according to individual specifications. This opens up enormous opportunities – and risks.
1. How does AI affect social decision-making processes?
Jessica Heesen: Artificial intelligence is used to support a wide variety of decision-making processes of varying social relevance. On a small scale, it is used, for example, to distribute schoolchildren optimally among different primary schools. Large, structurally significant decision-making processes concern, for example, what we see on social media or how traffic flows are directed. When algorithmic decision-making systems directly influence political control, this is referred to as algorithmic governance. Examples include predictive policing, applications for health, energy and water supply in smart city concepts, and systems for automated border control. In all these applications, AI systems are intended to ease the burden on humans and support their decisions, particularly through pattern recognition – but not to replace them. This is an important point: from a legal and ethical perspective, human oversight and decision-making authority must always be maintained. In practice, however, decisions are often delegated entirely to AI systems due to time constraints, a lack of expertise and misplaced trust.
2. What skills should people have in order to recognize AI content and critically evaluate it?
Jessica Heesen: Here, too, we need to consider the huge range of AI applications. On the one hand, we are currently seeing a boom in AI-generated image and video content, mainly on social media platforms such as TikTok. This is referred to as AI spam or AI slop, because such content is cheaply produced and designed solely to generate quick clicks. It has an indirect effect on democracy because it cheapens public communication and attracts attention with irrelevant, machine-generated content. On the other hand, we see AI-generated summaries in search engines which, provided they are correct, make knowledge accessible in a compact way with few or even no barriers. However, when it comes to AI content, whether meaningful or questionable, we must always be aware that it is based on content or data originally created and conceived by humans. This data pool could increasingly be displaced by AI-generated content, which is in turn fed back into generative AI on the internet. As a result, the verifiability of the content is lost and its factual accuracy becomes increasingly questionable. One example of this recursive use of AI-generated content by AI, from 2023, is the systematic misrepresentation of peacock chicks, which found its way into established search engines.
3. Creating and spreading fake news and deepfakes is easier than ever with AI. How does this affect our society?
Jessica Heesen: The spread of disinformation is a serious threat to democracy. AI applications in social media can even support forms of hybrid warfare. This can take the form of deliberately disseminated, AI-manipulated audio and video recordings that serve as supposed evidence, or of artificially generated opinion leaders in text-based forums. People are still accustomed to placing a great deal of trust in video and audio recordings. But the days when you could trust your eyes or believe that a voice was unmistakable are over. Even though images and familiar voices have strong suggestive power, we must reckon with the possibility that they have been manipulated or generated by AI. Yet it is not only obvious AI fakes that are a problem; the use of AI for personalisation and microtargeting is also an issue. AI perfects the digital surveillance infrastructures already established by the platform economy and enables specific content to be delivered to precisely those people who are particularly receptive to it. This can result in tailored, welcome information, but also in a manipulative and selective curation of content.
4. How can we ensure that citizens benefit more from AI than they are harmed by it?
Jessica Heesen: I view AI applications for communication as media. Among other things, this means that they show only a certain part of reality and can spread misinformation. However, they can also be used to convey information more effectively and to contribute to public understanding. Central to such democratic use would be for AI infrastructures for public communication to be in the hands of their users. The democratic function of the media is to support the dissemination of information, the formation of opinion and understanding in accordance with recognised quality standards. For AI in public communication, this means supporting formats that give priority to relevant content, groups and initiatives oriented towards the common good. At present, preference is given to content that generates particularly high advertising revenue in its environment. The current AI infrastructure can be compared to a road network owned by large US companies that earn a lot of money from it. We are allowed to drive on it, but they determine the routes to our destination, the traffic-light sequences and the speed. As a society, we would never tolerate this on our roads, but when it comes to online communication, we have unfortunately come to accept this situation as normal.