Ms. Bullinger-Hoffmann, how can AI systems support people with disabilities in the world of work?
Angelika Bullinger-Hoffmann: AI-based technologies can support people with impairments in two ways: by compensating for a (physical) impairment or by creating a barrier-free working environment. The first approach focuses on AI-supported exoskeletons or orthoses that directly and, more importantly, individually augment people's physical abilities. This enables people with impairments to be integrated into existing work processes. They can then also take on new or expanded roles, tasks and activities, which can benefit mental health. One conceivable example is an AI-based exoskeleton that supports mechanics in physically strenuous tasks, adapted to their individual abilities.
In the design of an accessible working environment, AI systems are used to overcome cognitive impairments and reduce hurdles. AI-based assistance systems take over control of a process once it exceeds a certain level of complexity, provide learning support or even offer social coaching.
Following the principle of design for all, both approaches benefit not only people with impairments but other people as well. Think, for example, of AI-supported speech software used by people who speak with an accent, or of AI-supported orthoses that allow elderly people to continue performing their activities with pleasure.
What conditions need to be created in the companies for this to happen?
Angelika Bullinger-Hoffmann: The AI systems must be designed so that they actually provide support right from the start and do not, for example, overburden users. That's why it's important to get employees on board early when introducing such technology in the workplace. In ergonomics, this principle is called user integration, and it has been shown in many cases that early involvement of future users increases acceptance, eases the introduction and reduces the need for subsequent improvements. Especially with a technology as "personal" as an AI-controlled exoskeleton, involving employees from the very beginning is therefore central.
Another major challenge is how the AI systems use data. Data protection requirements must be observed here, as the systems (must) work with sensitive personal data. Transparent communication with employees about which data is collected, by whom and where, and how it is used, is essential.
Finally, the use of AI technologies can of course also create dependencies: users become so accustomed to the support that they can no longer live and work well without it. Such a dependency can arise on the provider of the respective system, but also on the workplace at which the system is provided and can be used.
AI systems can also further exclude people with disabilities, for example by completely taking over simpler tasks. How can exclusion be avoided when using AI?
Angelika Bullinger-Hoffmann: It's true: the division of labor between humans and AI will make tasks more interdisciplinary, more communicative, more varied, but also more complex. This goes hand in hand with higher demands on employees' competencies, which is particularly challenging for people with learning difficulties or with impairments in social and communicative skills.
Two thoughts on this: On the one hand, participation in the world of work must already start with participation in (further) education. If the educational landscape is not designed to be inclusive, there is a lack of the necessary prerequisites for making the world of work itself inclusive.
In this context, realizing participation means recognizing the changing reality of work and shaping its prerequisites so that the use of AI does not lead to more exclusion. In addition, the use of AI systems must always be monitored. It has already happened, for example, that AI systems in recruiting have made decisions that led to unjust or unfair results. This can be explained by the fact that AI software treats every applicant equally even when they are unequal, for example because one applicant has a learning disability and another has ADHD. This is where human recruiters must step in and evaluate unequal conditions unequally, by weighting grades differently, for example. Human fairness is not AI egalitarianism. As long as all AI users are aware of this and act accordingly, exclusion can be avoided when using AI.
The white paper "With AI to more participation in the world of work" is available for free download (in German).
The interview is released for editorial use, provided the source is acknowledged (© Plattform Lernende Systeme).