TY - JOUR
T1 - LLMs Highlight the Importance of Interaction in Human Language Learning
AU - Kristensen-McLachlan, Ross Deans
AU - Contreras Kallens, Pablo Andres
AU - Christiansen, Morten H.
PY - 2025/3
Y1 - 2025/3
AB - Recent years have seen Large Language Models (LLMs) achieve impressive performance on core linguistic abilities. This can be taken as a demonstration that, contrary to long-held assumptions about innate linguistic constraints, language can be learned through statistical learning from linguistic input alone. However, human language acquisition did not evolve as the passive absorption of linguistic input; it is instead fundamentally interactive, guided by continuous feedback and social cues. Recent advances in LLM engineering have introduced an additional training step that uses more human-like feedback, in what has come to be known as Reinforcement Learning from Human Feedback (RLHF). This procedure results in models that more closely mirror human linguistic behaviors, even reproducing characteristic human-like errors. We argue that the way RLHF changes the behavior of LLMs highlights how communicative interaction and socially informed feedback, beyond input-driven statistical learning alone, can explain fundamental aspects of language learning. In particular, we take LLMs as models of “idealized statistical language learners” and RLHF as a form of “idealized language feedback”, showing that this perspective offers valuable insights into our understanding of human language development.
M3 - Journal article
SN - 2199-174X
JO - Linguistics Vanguard: multimodal online journal
JF - Linguistics Vanguard: multimodal online journal
ER -