LLMs Highlight the Importance of Interaction in Human Language Learning

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Recent years have seen Large Language Models (LLMs) achieve impressive performance on core linguistic abilities. This can be taken as a demonstration that, contrary to long-held assumptions about innate linguistic constraints, language can be learned through statistical learning from linguistic input alone. However, human language acquisition did not evolve as the passive absorption of linguistic input; it is fundamentally interactive, guided by continuous feedback and social cues. Recent advances in LLM engineering have introduced an additional training step that draws on more human-like feedback, in what has come to be known as Reinforcement Learning from Human Feedback (RLHF). This procedure yields models that more closely mirror human linguistic behaviors, even reproducing characteristic human-like errors. We argue that the way RLHF changes the behavior of LLMs highlights how communicative interaction and socially informed feedback, over and above input-driven statistical learning, can explain fundamental aspects of language learning. In particular, we take LLMs as models of “idealized statistical language learners” and RLHF as a form of “idealized language feedback”, showing that this perspective offers valuable insights into our understanding of human language development.
Original language: English
Journal: Linguistics Vanguard : multimodal online journal
ISSN: 2199-174X
Publication status: Accepted/In press - Mar 2025
