Biased AI in Long-Term Care - It's more than just in the data

Title: Biased AI in Long-Term Care - It's more than just in the data

Authors: Victoria Kontrus, Roger von Laufenberg

Published: 20.09.2024 at Socio-Gerontechnology Network Annual Meeting

Full text available: n.a.

Citation:

Kontrus, V. & von Laufenberg, R. (2024, Sept 19-20). Biased AI in Long-Term Care: It's more than just in the data [Presentation]. Socio-Gerontechnology Network Annual Meeting, Vienna.

Abstract:

This presentation belongs to a three-part series on findings of the AlgoCare project, revolving around the core themes of bias, trustworthiness and fairness of algorithmic systems in long-term care. AlgoCare explores the algorithmic governance of care through three case studies, each focusing on a different technology used in long-term care: fall detection systems, social robots and pain assessment tools. This presentation addresses the interrelation of bias and fairness. It analyzes how different forms of biased practices in the algorithmic governance of care emerge in the three case studies, and how bias affects the practices of caregivers, their relationships with care recipients, as well as the lives of older adults living in long-term care facilities. In doing so, it steers away from more traditional notions of bias, such as data bias, and sheds light on forms of bias which thus far remain understudied in research on artificial intelligence.