Is time on our side? A web-based and fMRI investigation of offline speech-motor learning

Poster A70 in Poster Session A, Thursday, October 6, 10:15 am - 12:00 pm EDT, Millennium Hall
This poster is part of the Sandbox Series.

Anne L. van Zelst1, Rahulkrishna Gurram Thimmugari1, F. Sayako Earle1; 1University of Delaware

Offline periods, whether spent in wakeful rest or in sleep, promote the memory consolidation of newly learned motor skills. However, existing models of speech-motor learning focus on active, online practice and the protracted development of the movement component of the speech-motor representation. van Zelst & Earle (2020) hypothesized that components of the speech-motor representation may exhibit distinct offline learning time courses. Here, we present behavioral findings that support this novel framework. We will also present preliminary neuroimaging evidence; collection of this dataset is currently underway. Together, these datasets provide new evidence on the possible neurobiological mechanisms of speech-motor learning.

In the web-based behavioral experiment, we examined the effects of offline time on the speech-motor production of a trained vowel contrast. Forty-five typical native speakers of American English, aged 18 to 25 years, were recruited. Learners logged on between 8:00 and 9:00 AM or between 8:00 and 9:00 PM and trained in the production of two non-native Danish vowels, [y] and [ø]. Participants completed baseline assessments and sleep habit questionnaires, followed by the speech sound production training. They then completed a post-training production assessment of the Danish vowels in trained ([V]) and untrained ([hVd]) contexts, either immediately (IMMEDIATE) or after a 12-hour delay with (SLEEP) or without (REST) nocturnal sleep. F1, F2, and F3 formants were measured using Praat. Movement accuracy was quantified as the Euclidean distance between the F1-F2-F3 values of participant productions, recorded during the pre- and post-training assessments, and those of the Danish speaker model. To obtain perceptual ratings of the productions, 29 native Danish speakers were recruited to complete a web-administered perceptual identification task on the American English speakers' productions.

Group differences were analyzed via linear mixed-effects analyses. With Euclidean distance as the dependent variable, the model contained a three-way interaction term between Delay, Time, and Training and a separate three-way interaction term between Sleep, Time, and Training, with time of day of training and Vowel ([y], [ø]) entered as covariates and Participant as a random effect. Similarly, listener identification accuracy of the target vowel productions was modeled using a logistic generalized mixed-effects model with the same interaction terms, with the Experiment 1 participants (talkers) and the Experiment 2 participants (listeners) modeled as random effects. These analyses yielded two novel findings. First, a post-practice period that includes nocturnal sleep improved speech-motor movement accuracy without additional practice. Second, the perceptual identification accuracy of the native Danish speakers was greatest for the vowel productions of talkers who slept.

In the fMRI experiment, participants are scanned while completing in-scanner tasks adapted from the behavioral experiment, either immediately following training (IMMEDIATE) or after a 12-hour delay with (SLEEP) or without (REST) nocturnal sleep. After acquisition of a 3D T1-weighted anatomical dataset, event-related functional data (45 axial slices of 2-mm-thick echo-planar images acquired with a multi-band accelerated sequence) are obtained in three runs. We predict that the REST group will show increased hippocampal activation, whereas the SLEEP group will show increased cortical activation. Neuroimaging will supplement our behavioral findings while highlighting the neurobiological mechanisms underlying the establishment of speech-motor representations.
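
The following is a minimal Python sketch of the movement-accuracy metric and the production-accuracy model described above; it is not the authors' analysis code. The abstract specifies Praat-based formant measurement, a Euclidean-distance metric, and linear mixed-effects models, but the file name productions.csv, the column names, and the numeric formant values below are hypothetical placeholders assumed for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def formant_distance(production, model_target):
        """Euclidean distance between a production's (F1, F2, F3) values and
        the Danish speaker model's (F1, F2, F3) values, in Hz."""
        return float(np.linalg.norm(np.asarray(production, dtype=float)
                                    - np.asarray(model_target, dtype=float)))

    # Illustrative values only: a learner's [y] production vs. a model token.
    learner_y = (310.0, 1900.0, 2500.0)   # F1, F2, F3 in Hz
    model_y = (280.0, 2100.0, 2600.0)
    print(formant_distance(learner_y, model_y))  # smaller distance = more accurate movement

    # Hypothetical long-format table: one row per production, with the Euclidean
    # distance as the dependent variable and the design factors named as in the
    # abstract (Delay, Sleep, Time, Training, TimeOfDay, Vowel, Participant).
    df = pd.read_csv("productions.csv")

    # Linear mixed-effects model with the two three-way interaction terms and the
    # covariates described in the abstract; Participant enters as a random intercept.
    lmm = smf.mixedlm(
        "distance ~ Delay*Time*Training + Sleep*Time*Training + TimeOfDay + Vowel",
        data=df,
        groups=df["Participant"],
    ).fit()
    print(lmm.summary())

The companion analysis of listener identification accuracy, a logistic mixed-effects model with crossed talker and listener random effects, is not shown here; crossed random effects of that kind are more naturally handled by a dedicated GLMM tool such as lme4::glmer in R.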

Topic Areas: Speech Motor Control, Speech Perception