Epistemic consequences of unfair tools

Research output: Contribution to journal › Journal article › Research › peer-review

1 Citation (Scopus)

Abstract

This article examines the epistemic consequences of unfair technologies used in digital humanities (DH). We connect bias analysis informed by the field of algorithmic fairness with perspectives on knowledge production in DH. We examine the fairness of Danish Named Entity Recognition tools through an innovative experimental method involving data augmentation, and we evaluate performance disparities using two metrics of algorithmic fairness: calibration within groups and balance for the positive class. Our results show that only two of the ten tested models comply with the fairness criteria. From an intersectional perspective, we shed light on how unequal performance across groups can exclude and marginalize certain social groups, causing their voices and experiences to be disregarded and silenced. We propose incorporating algorithmic fairness into the selection of tools in DH to help alleviate the risk of perpetuating silence and to move towards fairer and more inclusive research.
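As a rough illustration of the kind of evaluation the abstract describes, the sketch below perturbs sentences by substituting person names associated with different demographic groups and measures how often a plugged-in NER model tags the inserted name as a person. Everything here is an assumption for illustration: the `predict_entities` interface, the template sentences, and the name lists are hypothetical and are not the authors' data or pipeline.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical Danish template sentences with a slot for a person name.
TEMPLATES = [
    "{name} flyttede til København i 1990.",
    "I går mødte jeg {name} på biblioteket.",
]

# Hypothetical name lists keyed by demographic group (illustrative only).
NAME_GROUPS: Dict[str, List[str]] = {
    "majority": ["Anne Hansen", "Peter Jensen"],
    "minority": ["Fatima Hassan", "Mohammed Ali"],
}

def per_group_recall(
    predict_entities: Callable[[str], List[Tuple[str, str]]],
    templates: List[str],
    name_groups: Dict[str, List[str]],
) -> Dict[str, float]:
    """Fraction of augmented sentences, per group, in which the inserted
    name is recognized as a person entity.

    `predict_entities` is an assumed interface: it takes one sentence and
    returns (span_text, label) pairs, with "PER" marking person entities.
    """
    recall: Dict[str, float] = {}
    for group, names in name_groups.items():
        hits, total = 0, 0
        for template in templates:
            for name in names:
                sentence = template.format(name=name)
                spans = predict_entities(sentence)
                total += 1
                # Hit: some predicted person span covers the inserted name.
                if any(label == "PER" and name in span
                       for span, label in spans):
                    hits += 1
        recall[group] = hits / total
    return recall
```

Comparing these per-group recall values is in the spirit of the "balance for the positive class" criterion, which asks that genuine person names be treated equally well across groups; checking "calibration within groups" would additionally require the model's confidence scores, so that predictions made at a given confidence level are equally reliable for every group.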

Original language: English
Journal: Digital Scholarship in the Humanities
Volume: 39
Issue: 1
Pages (from-to): 198–214
Number of pages: 17
ISSN: 2055-7671
DOIs
Publication status: Published - Apr 2024

Keywords

  • algorithmic fairness
  • bias
  • intersectionality
  • knowledge production
  • named entity recognition
  • natural language processing
