
A Bayesian meta-analysis of infants' ability to perceive audio-visual congruence for speech

Research output: Contribution to journal › Journal article › Research › Peer-reviewed


This paper quantifies the extent to which infants can perceive audio-visual congruence for speech information and assesses whether this ability changes with native language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21: 0.50]). This result suggests that infants possess a robust ability to detect audio-visual congruence for speech. Moderator analyses, moreover, suggest that infants' audio-visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
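To illustrate roughly what a hierarchical Bayesian robust meta-analytic model of this kind can look like in code, the sketch below fits a random-effects model with a Student-t likelihood over study-level effect sizes. It is a minimal illustration, not the authors' implementation: the data (`yi`, `sei`, `study_idx`), the priors, and the choice of the PyMC library are all assumptions made for the example.

```python
# Minimal sketch (illustrative, not the authors' model) of a hierarchical Bayesian
# robust meta-analysis: study-level true effects drawn from a population distribution,
# observed effect sizes modeled with a Student-t likelihood for robustness to outliers.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_effects, n_studies = 92, 24                              # counts taken from the abstract
study_idx = rng.integers(0, n_studies, size=n_effects)     # hypothetical study membership
sei = rng.uniform(0.1, 0.4, size=n_effects)                # hypothetical standard errors
true_study = rng.normal(0.35, 0.2, size=n_studies)         # hypothetical study-level effects
yi = rng.normal(true_study[study_idx], sei)                # hypothetical observed effect sizes

with pm.Model() as meta_model:
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)                # overall (pooled) effect
    tau = pm.HalfNormal("tau", sigma=0.5)                  # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=n_studies)  # study-level effects
    nu = pm.Gamma("nu", alpha=2.0, beta=0.1)               # Student-t degrees of freedom
    pm.StudentT("y", nu=nu, mu=theta[study_idx], sigma=sei, observed=yi)
    idata = pm.sample(2000, tune=2000, target_accept=0.9)

# The posterior for `mu` summarizes the pooled effect; its credible interval plays
# the role of the interval reported in the abstract.
```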

Original language: English
Journal: Infancy
Volume: 27
Issue: 1
Pages (from-to): 67-96
Number of pages: 30
ISSN: 1525-0008
DOIs
Publication status: Published - Jan 2022

Research areas

  • Intermodal representation, intersensory perception, selective attention, chained equations, directed speech, face, information, integration, voice, experience

