Abstract
This article examines the epistemic consequences of unfair technologies used in digital humanities (DH). We connect bias analysis informed by the field of algorithmic fairness with perspectives on knowledge production in DH. We examine the fairness of Danish Named Entity Recognition tools through an innovative experimental method involving data augmentation and evaluate performance disparities using two metrics of algorithmic fairness: calibration within groups and balance for the positive class. Our results show that only two of the ten tested models comply with the fairness criteria. From an intersectional perspective, we shed light on how unequal performance across groups can lead to the exclusion and marginalization of certain social groups, whose voices and experiences are then disregarded and silenced. We propose incorporating algorithmic fairness into the selection of tools in DH to help alleviate the risk of perpetuating silence and move towards fairer and more inclusive research.
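The two fairness criteria named in the abstract can be made concrete. The sketch below is a minimal, illustrative implementation following Kleinberg et al.'s definitions, not the authors' own code; the scores, gold labels, and group assignments are hypothetical stand-ins for what, in the article's setup, would come from Danish NER models evaluated on name-augmented test sentences.

```python
# Illustrative sketch only (hypothetical data, not the article's implementation).
# Calibration within groups: within each score bin, the mean predicted score
# should match the observed positive rate, for every group.
# Balance for the positive class: the mean score assigned to true positives
# should be (approximately) equal across groups.

from collections import defaultdict

def calibration_within_groups(scores, labels, groups, n_bins=5):
    """Per group and score bin, return (mean predicted score, observed positive rate)."""
    stats = defaultdict(lambda: defaultdict(lambda: [0.0, 0.0, 0]))  # group -> bin -> [sum_score, sum_pos, n]
    for s, y, g in zip(scores, labels, groups):
        b = min(int(s * n_bins), n_bins - 1)
        stats[g][b][0] += s
        stats[g][b][1] += y
        stats[g][b][2] += 1
    return {g: {b: (sum_s / n, sum_pos / n) for b, (sum_s, sum_pos, n) in bins.items()}
            for g, bins in stats.items()}

def balance_for_positive_class(scores, labels, groups):
    """Mean predicted score among gold-positive items, per group."""
    totals = defaultdict(lambda: [0.0, 0])
    for s, y, g in zip(scores, labels, groups):
        if y == 1:
            totals[g][0] += s
            totals[g][1] += 1
    return {g: tot / n for g, (tot, n) in totals.items() if n > 0}

# Hypothetical example: model confidence that a span is a PER entity, the gold
# label (1 = it really is a person name), and the demographic group of the
# name substituted into the augmented sentence.
scores = [0.95, 0.80, 0.40, 0.90, 0.55, 0.30]
labels = [1, 1, 0, 1, 1, 0]
groups = ["majority", "majority", "majority", "minority", "minority", "minority"]

print(calibration_within_groups(scores, labels, groups))
print(balance_for_positive_class(scores, labels, groups))
```

Under these definitions, a large gap between groups in either report would indicate the kind of performance disparity the article measures.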
Original language | English |
---|---|
Journal | Digital Scholarship in the Humanities |
Volume | 39 |
Issue | 1 |
Pages (from-to) | 198–214 |
Number of pages | 17 |
ISSN | 2055-7671 |
DOI | |
Status | Published - Apr. 2024 |