Artificial Intelligence and Discrimination
Artificial intelligence (AI) is already used far more widely today than many people realise, and its potential for discrimination is not always obvious. Although people themselves are at times guilty of unjustifiable discrimination, they often perceive the decisions made by computer programs and software as factual, objective and neutral. In reality, however, AI-based systems sometimes make decisions that are problematic, discriminatory, or that draw distinctions without good reason. Many software systems explicitly or implicitly embody a set of social rules for governing behaviour, whether in the form of regulations, transactions and coordination, or access and usage rights; first and foremost, they are an effective technical means of putting such systems of rules into practice. Self-learning systems therefore have the potential not only to adopt pre-existing discrimination, but even to amplify it.
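The mechanism by which a self-learning system adopts pre-existing discrimination can be sketched in a few lines. The example below is a hypothetical illustration, not a real system: historical hiring labels are biased against one group, the group itself is hidden from the model, but a "postcode" feature acts as a proxy for it. A simple learner that merely imitates the historical labels ends up reproducing the discriminatory thresholds.

```python
import random

random.seed(42)

# Hypothetical historical data: past decisions applied a stricter skill
# bar to group "B". The group is NOT given to the model, but "postcode"
# is a near-perfect proxy for it (a common real-world pattern).
def applicant(group):
    skill = random.random()
    postcode = 1 if group == "A" else 2              # proxy feature
    hired = skill > (0.5 if group == "A" else 0.7)   # biased label
    return {"postcode": postcode, "skill": skill, "hired": hired}

train = [applicant(g) for g in "AB" for _ in range(2000)]

# "Training": per postcode, choose the skill threshold that best
# reproduces the historical labels. The model never sees the group.
def fit_threshold(records):
    best_t, best_acc = 0.0, -1
    for i in range(101):
        t = i / 100
        acc = sum((r["skill"] > t) == r["hired"] for r in records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

model = {pc: fit_threshold([r for r in train if r["postcode"] == pc])
         for pc in (1, 2)}

# The learned thresholds mirror the historical discrimination:
# equally skilled applicants are treated differently by postcode.
print(model)
```

The learned thresholds come out close to the biased historical cutoffs (about 0.5 for postcode 1 and 0.7 for postcode 2), even though the group attribute was never an input. This is the sense in which a system that simply optimises accuracy against past decisions adopts, and can entrench, the discrimination encoded in them.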