Perceptual processing of a complex musical context: Testing a more realistic mismatch negativity paradigm

Research output: Contribution to conference › Conference abstract for conference › Research › peer-review

Background: One striking property of the human auditory system is its capacity to transform a simple acoustic wave into a complex percept such as music. While the exact mechanisms of this process are unknown, the evidence suggests that the formulation, violation and updating of auditory predictions play a fundamental role in music perception. The mismatch negativity (MMN) is a brain response that offers unique insight into these processes. The MMN is elicited by deviants in a series of repetitive sounds and reflects the perception of change in physical and abstract sound regularities. It is therefore regarded as a prediction error signal and a neural correlate of the updating of predictive perceptual models. In music, the MMN has been particularly valuable for the assessment of musical expectations, learning and expertise. However, the MMN paradigm has an important limitation: its ecological validity. Most studies use single tones or simple pitch patterns as stimuli, failing to represent the complexity of everyday music. This matters because music perception is highly dependent on the statistical regularities of widely varying sound contexts. Furthermore, different types of listeners (e.g., jazz vs. classical musicians) might perceive different aspects and types of music in different ways. These nuances are very difficult to capture with current paradigms.

Aims: Our main goal is to determine whether it is possible to record MMNs in a more ecologically valid and complex musical context. To this end, we will develop a new paradigm using more real-sounding stimuli: two-part music excerpts created by adding a melody to a previous design based on the Alberti bass (Vuust et al., 2011). Our second goal is to determine how the complexity of this context affects the predictive processes indexed by the MMN. We will achieve this in two stages. First, we want to establish how the pitch complexity of the melody affects predictions for different features; thus, we will compare the melody and the bass when presented separately. Second, we want to determine how presenting the melody and the bass together affects MMN responses; for this purpose, we will compare the two-part excerpts with the melody and the bass presented individually.

Method: We will measure non-musicians' responses to deviants (tuning, intensity, timbre and slide) embedded in two-part music while they watch a silent movie. Stimuli will consist of several melodies placed on top of the Alberti bass used previously. The musical excerpts will be randomly transposed to several keys. There will be four blocks: melody alone, bass alone, melody and bass together, and bass in the pitch range of the melody. This last block is included to control for pitch-height confounds. We will use magnetoencephalography (MEG) to record MMNs, and magnetic resonance imaging (MRI) to aid source localization.

Results: We expect MMNs for all features in all blocks. For the comparison between the melody and the bass, we hypothesize reduced, but still present, MMNs in the melody for pitch-related features (tuning, slide), because the melody's pitch complexity is higher and its pitch predictability lower, making pitch-related deviants less surprising. Regarding the two-part excerpts, previous studies suggest a reduction of the MMN when several streams are heard simultaneously, possibly due to competition for neural resources. Consequently, we expect reduced MMNs in this condition compared to the bass and the melody presented separately.

Conclusions: Our study can open the door to testing auditory perception in more real-sounding and complex musical contexts. It may lay the groundwork for research into more fine-grained questions with different types of listeners, such as particular kinds of musicians and cochlear implant users. We hope our efforts promote interest in the use of more realistic stimuli in music research.
Original language: English
Publication year: 2 Aug 2017
State: Published - 2 Aug 2017
Event: European Society for Cognitive Sciences Of Music (ESCOM): Expressive Interaction with Music - University of Ghent, Ghent, Belgium
Duration: 1 Aug 2017 - 4 Aug 2017
Internet address: http://www.escom2017.org


Research areas

  • Auditory perception, music perception, magnetoencephalography, multifeature, MMN, ERP


ID: 125875424