Abstract
Computer vision datasets have proved to be key instruments for interpreting visual data. This article concentrates on benchmark datasets, which are used to define a technical problem and provide a common referent against which different solutions can be compared. Through three case studies, ImageNet, the Pilot Parliaments Benchmark dataset and VizWiz, the article analyzes how photography is mobilised to conceive interventions in computer vision. The benchmark is a privileged site where photographic curation is tactically performed to change the scale of visual perception, oppose racial and gendered discrimination, or rethink image interpretation for visually impaired users. Through the elaboration of benchmarks, engineers create curatorial pipelines involving large chains of heterogeneous actors exploiting various photographic practices, from the amateur snapshot to political portraiture or photography made by blind users. The article contends that the mobilisation of photography in the benchmark goes together with a multifaceted notion of vulnerability. It analyzes how various forms of vulnerability and insecurity pertaining to users, software companies, or vision systems are framed, and how benchmarks are conceived in response to them. Following the alliances that form around vulnerabilities, the text explores the potential and limits of the practices of benchmarking in computer vision.
Original language | English |
---|---|
Journal | Photographies |
Volume | 16 |
Issue | 2 |
Pages (from-to) | 173-189 |
Number of pages | 17 |
ISSN | 1754-0763 |
DOIs | |
Publication status | Published - May 2023 |
Fingerprint
Dive into the research topics of 'Practices of benchmarking: Vulnerability in the computer vision pipeline'. Together they form a unique fingerprint.
Projects
- 1 Active
Artistic Practice under Contemporary Conditions
Lund, J. (PI), Maleve, N. R. M. (Participant) & Ørskov, M. (Participant)
01/03/2022 → 28/02/2026
Project: Research