Chatbots Are Not Reliable Text Annotators

Ross Deans Kristensen-McLachlan*, Miceal Canavan, Márton Kardos (Member of author collaboration), Mia Jacobsen (Member of author collaboration), Lene Aarøe

*Corresponding author of this work

Publication: Working paper/Preprint

Abstract

Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regard to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer alternatives that remedy these challenges. It is therefore important to evaluate the performance of OS LLMs relative to ChatGPT and to standard approaches to supervised machine learning classification. We conduct a systematic comparative evaluation of the performance of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, with results compared to those of more traditional supervised classification models. Using a new dataset of Tweets from US news media, and focusing on simple binary text annotation tasks for standard social science concepts, we find significant variation in the performance of ChatGPT and OS models across the tasks, and that supervised classifiers consistently outperform both. Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise against using ChatGPT for substantive text annotation tasks in social science research.
Original language: English
Publisher: arxiv.org
DOI
Status: Published - 2023
