Explainable AI
White paper
With the rapid spread of chatbots, artificial intelligence (AI) has become a tangible part of many people's everyday lives. Yet how and why ChatGPT and other AI-based systems arrive at their results often remains opaque to users: what exactly happens in the 'black box' between model input and output? To make the results and decisions of complex AI systems comprehensible, algorithmic decisions must be explainable. This can both improve model quality and strengthen trust in AI. A current white paper from Plattform Lernende Systeme shows which methods and tools can make AI results comprehensible for different target groups, and it offers design options for research, teaching, politics and companies.
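One widely used family of explainability methods the white paper's topic evokes is feature-attribution analysis. As a minimal, hypothetical sketch (not taken from the white paper), the snippet below implements permutation importance: shuffle one input feature at a time and measure how much the model's error grows. Features whose shuffling hurts most matter most to the 'black box'. The toy linear model and its weights are invented for illustration only.

```python
import random

# Toy "black box": a fixed linear model over three features.
# Weights are illustrative; feature 0 dominates, feature 2 is unused.
WEIGHTS = [5.0, 1.0, 0.0]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def mse(X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(X, y, feature, seed=0):
    """Error increase when one feature's column is shuffled."""
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [x[:feature] + [c] + x[feature + 1:] for x, c in zip(X, col)]
    return mse(X_perm, y) - mse(X, y)

# Synthetic, noise-free data generated by the model itself.
rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

scores = [permutation_importance(X, y, f) for f in range(3)]
print(scores)  # feature 0 scores highest; feature 2 scores zero
```

Such scores give a target-group-friendly summary ("which inputs drove the result?") without requiring access to the model's internals, which is why model-agnostic methods of this kind are a common entry point to explainable AI.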