TY - JOUR
T1 - Are Chatbots Reliable Text Annotators? Sometimes
AU - Kristensen-McLachlan, Ross Deans
AU - Canavan, Miceal
AU - Kardos, Márton
AU - Jacobsen, Mia
AU - Aarøe, Lene
PY - 2025/4/1
Y1 - 2025/4/1
AB - Recent research highlights the significant potential of ChatGPT for text annotation in social science research. However, ChatGPT is a closed-source product, which has major drawbacks with regard to transparency, reproducibility, cost, and data protection. Recent advances in open-source (OS) large language models (LLMs) offer an alternative without these drawbacks. Thus, it is important to evaluate the performance of OS LLMs relative to ChatGPT and to standard approaches to supervised machine learning classification. We conduct a systematic comparative evaluation of a range of OS LLMs alongside ChatGPT, using both zero- and few-shot learning as well as generic and custom prompts, and compare the results with supervised classification models. Using a new dataset of tweets from US news media and focusing on simple binary text annotation tasks, we find significant variation in the performance of ChatGPT and the OS models across tasks, and that the supervised classifier using DistilBERT generally outperforms both. Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks.
KW - Natural Language Processing
KW - Open Science
KW - data annotation
KW - large language models
KW - social sciences
UR - http://www.scopus.com/inward/record.url?scp=105001712296&partnerID=8YFLogxK
DO - 10.1093/pnasnexus/pgaf069
M3 - Journal article
C2 - 40171238
SN - 2752-6542
VL - 4
JO - PNAS Nexus
JF - PNAS Nexus
IS - 4
M1 - pgaf069
ER -