Vulnerability Assemblages - Situating Vulnerability in the Political Economy of Artificial Intelligence

Title: Vulnerability Assemblages - Situating Vulnerability in the Political Economy of Artificial Intelligence

Authors: Vera Gallistl, Roger von Laufenberg, Katrin Lehner, Victoria Kontrus

Published: 30.08.2024 at the European Sociological Association Conference

Full Text available: n.a.

Citation:

Gallistl, V., von Laufenberg, R., Lehner, K. & Kontrus, V. (2024, Aug 27-30). Vulnerability Assemblages: Situating Vulnerability in the Political Economy of AI [Presentation]. European Sociological Association Conference, Porto.

Abstract:

Next to bias, transparency, and fairness, vulnerability is one of the terms recently used to discuss ethical aspects of AI. However, current discussions on AI vulnerability tend to individualize vulnerability, largely neglecting its political dimensions, which are rooted in systems of inequality, discrimination, and disadvantage. This paper explores how notions of vulnerability underpin the development and implementation of AI. It uses AI systems for older adults in long-term care (LTC) institutions as one example of how AI for groups that are constructed as vulnerable is created, marketed, and implemented. The paper draws on data from a multiple-perspective qualitative interview study (Vogl et al. 2018). Results uncover how AI designers use narratives around missing data on vulnerable populations as justifications for the creation of synthetic data, which is artificially manufactured rather than generated by real-world events. While this was a profitable business model for AI development companies, these practices of synthetic data creation ultimately rendered LTC residents voiceless in the development of AI. This contribution shows how vulnerability is situated in a political economy of AI that treats the absence of data on vulnerable groups as an opportunity for value creation rather than a chance to foster inclusion and equality. The paper ends with a critical outlook on a research program for a sociology of AI that puts vulnerabilities at its center to analyze the risks and precarities that emerge when designing and implementing AI systems in diverse contexts, particularly in domains where groups are generally vulnerabilized, such as long-term care or healthcare.