Fine-Tuning GPT-3 for Synthetic Danish News Generation

Mina Almasi, Anton Drasbæk Schiønning

Publication: Contribution to book/anthology/report/proceedings · Conference contribution in proceedings · Research · peer-reviewed

Abstract

While GPT-3 has garnered significant attention for its capabilities in natural language generation, research on its use outside of English is still relatively limited. We focus on how GPT-3 can be fine-tuned to generate synthetic news articles in a low-resource language, namely Danish. The model’s performance is evaluated on the dimensions of human and machine detection in two separate experiments. When presented with either a real or a GPT-3-generated news article, human participants achieve a classification accuracy of 58.1%. In contrast, a fine-tuned BERT classifier obtains 92.7% accuracy on the same task. This discrepancy likely stems from the fine-tuned GPT-3 model oversampling high-likelihood tokens in its text generation. Although this is undetectable to the human eye, it leaves a statistical signature for machine classifiers to detect. We address how decisions in the experimental design favoured the machine classifiers over the human evaluators, and whether the produced synthetic articles are applicable in a real-world context.
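As a rough illustration of the machine-detection side of the setup, the sketch below fine-tunes a Danish BERT encoder as a binary real-vs-synthetic news classifier with Hugging Face transformers. The checkpoint name, toy dataset, and hyperparameters are illustrative assumptions, not the configuration reported in the paper.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed Danish BERT checkpoint; the paper's exact classifier may differ.
CHECKPOINT = "Maltehb/danish-bert-botxo"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# Toy training data: label 0 = real news article, label 1 = GPT-3-generated article.
train_data = Dataset.from_dict({
    "text": [
        "Regeringen fremlagde i dag en ny klimaplan ...",  # placeholder real article
        "En ny undersøgelse viser, at danskerne ...",      # placeholder synthetic article
    ],
    "label": [0, 1],
})

def tokenize(batch):
    # Truncate long articles to BERT's 512-token limit.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="danish-news-detector",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

# Fine-tune the binary real-vs-synthetic classifier.
Trainer(model=model, args=args, train_dataset=train_data).train()
```

At inference time, the classifier's softmax over the two labels yields a probability that a given article is machine-generated; the abstract's point is that such a detector can exploit the statistical over-representation of high-likelihood tokens that human readers do not notice.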
Original language: English
Title: Proceedings of the 16th International Natural Language Generation Conference
Editors: C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Number of pages: 14
Publisher: Association for Computational Linguistics
Publication date: Sep. 2023
Pages: 54–68
ISBN (print): 979-8-89176-001-1
DOI
Status: Published - Sep. 2023
Event: 16th International Natural Language Generation Conference - Prague, Czech Republic
Duration: 11 Sep. 2023 – 15 Sep. 2023
Conference number: 16

Conference

Conference: 16th International Natural Language Generation Conference
Number: 16
Country/Territory: Czech Republic
City: Prague
Period: 11/09/2023 – 15/09/2023

Keywords

  • natural language processing
  • large language models
  • fine-tuning
  • machine learning
