Alterations in the Processing of Temporal Features for Speech Associated with Musicianship and Instrument Type
Poster A81 in Poster Session A, Tuesday, October 24, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.
McNeel Jantzen1, Katie Harrison1, K.J. Jantzen1; 1Western Washington University
Musical competence requires precision, and as a result musicians are more sensitive to acoustic features such as onset timing and frequency (Patel, 2011; Levitin, 2006). Musicians also have more accurate temporal and tonal representations of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark et al., 2009; Zendel & Alain, 2008). Taken together, these findings suggest that musical training may enhance the processing of acoustic information in speech sounds. While our previous research did not show a musician advantage for discrimination of temporal cues (Huntemer-Silveira et al., 2017; Jantzen et al., 2014; Jantzen & Scheurich, 2014), there was a trend suggesting that string musicians outperformed their wind musician counterparts (Davis et al., 2015). Moreover, our results did provide evidence that the voiced stimuli had a strong perceptual effect and that musicians were more sensitive to categorical boundary effects. However, the lack of robust results may have been due to the difficulty of the dichotic paradigm used. The current study therefore employed a speeded same-different (AX) discrimination task using pairs of speech stimuli differing in voice onset time along a voiced-to-voiceless continuum. Subjects rated pairs on a scale from 1 to 7, with 1 being ‘no difference’ and 7 being ‘very different’. Effects of musical training and the organization of temporal features were reflected in the EEG, as indexed by the location and amplitude of the ERPs. In addition, behavioral results indicate that performance on the difference-rating task varied as a function of instrument type and of sensitivity to rapidly changing temporal cues, suggesting a possible translation of musical cues into functional linguistic cues.
Consistent with previous results (Jantzen et al., 2016), the voiced phoneme acted as a strong perceptual magnet on the voiceless phoneme, thereby producing weaker categorical boundaries between the two phonemes. The voiceless phoneme does not contain the dominant voicing feature and therefore produced stronger categorical boundaries, effectively allowing the voiced phoneme to exist perceptually as a stronger, separate category. The clear categorical boundaries along the continuum, not just at either end of it, may reflect musicians’ sensitivity to and precise processing of the acoustic features of speech due to musical training, an enhanced right-hemisphere music network, and an indirect translation of musical cues into functional linguistic cues. Musicians focus on and direct their attention to small changes in acoustic features such as pitch and onset time, thereby developing acute processing of spectrotemporal acoustic information (Schneider et al., 2002; Marie et al., 2012). However, pitch and onset time are not used to convey the same information in language and music, so processing these features as musical cues may not translate to how they function in language. Consistent with Patel’s (2011) OPERA hypothesis, our results suggest an anatomical overlap of the neural areas that process acoustic features present in both speech and music. Additionally, musical training requires repetition that continually engages these neural areas and enhances musicians’ left-hemisphere language network.
Topic Areas: Speech Perception; Control, Selection, and Executive Processes