Establishing psychoperceptual profiles of acoustic processing: Beyond the gradient and categorical distinction
Poster C84 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.
Lindy Comstock1, Scott Johnson2, M. Florencia Assaneo3; 1Semel Institute for Neuroscience and Human Behavior, UCLA, 2Psychology Department, UCLA, 3Instituto de Neurobiología, UNAM
Research by Assaneo et al. [1-3] has identified two groups within the general population that display physiological differences in white matter structure that predict relative skill at audio-motor synchronization on a verbal entrainment task. Individuals who spontaneously synchronize their verbal output to an external rhythm possess a superior ability to correctly segment syllables in a word-learning task designed to mimic naturalistic language acquisition. These findings imply a link between audio-motor synchronization and word segmentation abilities. However, children attend to multiple cues for word segmentation [4,5]. In addition to stress timing, which corresponds to the entrainment task, prosodic features such as pitch accents and boundary tones [6,7], as well as an array of phonetic cues [8,9], interact to provide key information about where word boundaries lie. Similarly, variation in white matter structure has been proposed to explain individual differences in phonological processing [10,11] and pitch discrimination [12,13]. In this Sandbox Series abstract, we investigate whether high synchronizers exhibit a parallel ability to accurately perceive other acoustic dimensions that might contribute to their ability to detect word boundaries. Importantly, we sought to better understand how distinct psychoperceptual abilities arising from physiological variation in brain structure might collectively give rise to different cognitive styles of language processing. The literature distinguishes a “gradient” cognitive style (veridical encoding of phonetic acoustic features) from a “discrete” style (encoding acoustic variation as phonemic categorical representations) [14]. More recently, this two-way distinction has been problematized by evidence of partial convergence between styles: variation occurs on each scale, such that individuals may be proficient or poor at both types of encoding, or perform well in phonetic but not phonemic encoding [15].
In an effort to tie together the literature on individual differences in processing mechanisms, we ask how synchronization abilities may pattern with gradient and discrete processing styles. Participants (N = 33; N = 90 anticipated) performed a series of tasks to measure (i) synchronization: participants listened to a rhythmic train of syllables and concurrently whispered the syllable ‘tah’; (ii) discrimination: a two-alternative forced choice test for pitch contours (‘rising’, ‘falling’) and vowel categories (‘same’, ‘different’); and (iii) categorization: a two-alternative forced choice test for pitch contours (‘high’, ‘low’; 170-230 Hz; 10 Hz steps) and vowel categories (/ɛ/-/ɑ/; 0-400 Hz; 3 steps in F1/F2). Simultaneous EEG data were collected during passive listening to the audio stimulus files from the behavioral tests. Analysis steps include (i) group-level outcome measures for all tests, (ii) computation of the phase-locking value for the envelopes around the stimulus syllable rates, (iii) individual-level measurement of F1/F2 values and pitch height for elicited phonetic and pitch stimuli, (iv) a mixed effects regression model comparing performance between measures, and (v) a k-means cluster analysis of outcome scores. Preliminary findings support two bimodal distributions: (i) high and low synchronizers, and (ii) superior performance at either vowel category or pitch discrimination. However, better synchronization appears to occur when participant performance in the discrimination task is balanced across both acoustic categories. Additional analyses (elicitation data, EEG data, categorization tests) are pending.
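The phase-locking value (PLV) analysis mentioned above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rate, syllable rate, filter order, and bandwidth below are illustrative assumptions, and the function name `phase_locking_value` is our own.

```python
# Hedged sketch of a phase-locking value (PLV) computation between an EEG
# envelope and a stimulus envelope at the syllable rate. All parameters
# (fs, syll_rate, bandwidth) are illustrative, not the study's settings.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(env_eeg, env_stim, fs, syll_rate, half_bw=1.0):
    """PLV between two envelopes, band-pass filtered around syll_rate (Hz)."""
    # Band-pass both envelopes in a narrow band centred on the syllable rate.
    b, a = butter(2, [(syll_rate - half_bw) / (fs / 2),
                      (syll_rate + half_bw) / (fs / 2)], btype="band")
    # Instantaneous phase difference via the Hilbert transform.
    phase_diff = (np.angle(hilbert(filtfilt(b, a, env_eeg)))
                  - np.angle(hilbert(filtfilt(b, a, env_stim))))
    # PLV = magnitude of the mean phase-difference vector (1 = perfect locking).
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Example: two envelopes sharing a 4.5 Hz rhythm with a constant phase lag
# should phase-lock strongly (PLV near 1).
fs, syll_rate = 100, 4.5
t = np.arange(0, 30, 1 / fs)
stim = np.cos(2 * np.pi * syll_rate * t)
eeg = np.cos(2 * np.pi * syll_rate * t - 0.8)  # constant phase lag
print(phase_locking_value(eeg, stim, fs, syll_rate))  # close to 1.0
```

A PLV near 1 indicates a stable phase relationship between the neural and stimulus envelopes at the syllable rate; values near 0 indicate no consistent entrainment.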
Topic Areas: Speech Perception, Multisensory or Sensorimotor Integration