Abstract
Public sector adoption of AI techniques in welfare systems recasts historic national data as a resource for machine learning. In this paper, we examine how the use of register data for the development of predictive models produces new ‘afterlives’ for citizen data. First, we document a Danish research project’s practical efforts to develop an algorithmic decision-support model for social workers to classify children’s risk of maltreatment. Second, we outline the tensions emerging from project members’ negotiations about which datasets to include. Third, we identify three types of afterlives for citizen data in machine learning projects: (1) data afterlives for training and testing the algorithm, acting as ‘ground truth’ for inferring futures, (2) data afterlives for validating the algorithmic model, acting as markers of robustness, and (3) data afterlives for improving the model’s fairness, valuated for reasons of data ethics. We conclude by discussing how, on the one hand, these afterlives engender new ethical relations between state and citizens; and how, on the other hand, they also articulate an alternative view on the value of datasets, posing interesting contrasts between machine learning projects developed within the context of the Danish welfare state and mainstream corporate AI discourses of ‘the bigger, the better’.
| Original language | English |
|---|---|
| Journal | AI & Society |
| Number of pages | 11 |
| ISSN | 0951-5666 |
| Publication status | E-pub ahead of print - 2024 |
Keywords
- Data afterlives
- Dataset negotiations
- Machine learning
- Welfare state
Projects
- ADD: Algorithms, data and democracy
  Ratner, H. F. (Collaborator)
  09/04/2021 → 28/02/2031
  Project: Research