Practices of benchmarking: Vulnerability in the computer vision pipeline

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Abstract

Computer vision datasets have proved to be key instruments for interpreting visual data. This article concentrates on benchmark datasets, which are used to define a technical problem and provide a common referent against which different solutions can be compared. Through three case studies, ImageNet, the Pilot Parliaments Benchmark dataset and VizWiz, the article analyses how photography is mobilised to conceive interventions in computer vision. The benchmark is a privileged site where photographic curation is tactically performed to change the scale of visual perception, oppose racial and gendered discrimination, or rethink image interpretation for visually impaired users. Through the elaboration of benchmarks, engineers create curatorial pipelines involving large chains of heterogeneous actors exploiting various photographic practices, from the amateur snapshot to political portraiture or photography made by blind users. The article contends that the mobilisation of photography in the benchmark goes together with a multifaceted notion of vulnerability. It analyses how various forms of vulnerability and insecurity pertaining to users, software companies, or vision systems are framed, and how benchmarks are conceived in response to them. Following the alliances that form around vulnerabilities, the text explores the potential and limits of the practices of benchmarking in computer vision.
Original language: English
Journal: Photographies
Volume: 16
Issue: 2
Pages (from-to): 173-189
Number of pages: 17
ISSN: 1754-0763
DOI
Status: Published - May 2023

