
Disentangling auditory and audiovisual speech perception during movie viewing using optical brain imaging

Poster Session B, Friday, October 25, 10:00 - 11:30 am, Great Hall 3 and 4

Jonathan Peelle1, Aahana Bajracharya2, Arefeh Sherafati3, Michael Jones2, Emily Milarachi4, Noel Dwyer2, Adam Eggebrecht2, Tamara Hershey2, Jill Firszt2, Joseph Culver2; 1Northeastern University, 2Washington University in Saint Louis, 3UCSF, 4Penn State

Human interaction involves communicating in busy environments, conversations that provide context, and the ability to see a talker’s mouth while they speak. However, many of these features are missing from laboratory studies of speech processing. The goal of the current study was to better understand everyday communication using a movie, which mimics many of the cues found in natural environments. In two experiments, participants viewed ~10 minutes of the movie The Good, the Bad, and the Ugly (1966). Their only task was to watch attentively. To measure regional brain activity we used high-density diffuse optical tomography (HD-DOT). HD-DOT is free of acoustic noise and compatible with implanted medical devices, making it well suited to studies of naturalistic listening in a wide range of participants. Our HD-DOT system has 96 sources and 92 detectors, providing good coverage over large portions of the occipital, temporal, and frontal lobes. In Experiment 1 we examined responses to auditory-only and audiovisual speech in a publicly available data set from 58 adults. We manually identified speech events in the movie clip, classifying each as auditory-only speech (the speaker’s mouth was not visible) or audiovisual speech (the speaker’s mouth was visible). We created regressors for the two speech types by convolving the event timings with a canonical hemodynamic response function, and entered these regressors into a whole-brain GLM. Activity for auditory-only speech was strongest in the superior temporal lobes, whereas audiovisual speech produced additional increases in visual cortex and along the right lateral temporal lobe. In Experiment 2 we followed a similar analysis plan to compare responses between a group of 18 adults with a cochlear implant and a group of 18 controls with good hearing. Listeners with cochlear implants showed significantly more activity in left dorsolateral prefrontal cortex than did listeners with normal hearing, consistent with prior work using more conventional laboratory paradigms. In conclusion, we used optical brain imaging to isolate responses to auditory-only and audiovisual speech during movie viewing. In participants with good hearing, we found the expected modality-preferential responses in auditory and visual cortex. In listeners with cochlear implants, we found additional activity in left prefrontal cortex, consistent with increased cognitive demand during listening. These findings lend further support to the hypothesis that the brain regions supporting successful comprehension depend on moment-by-moment fluctuations in the modality of the information being processed, as well as the acoustic clarity of the speech signal.
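For readers curious about the analysis mechanics, below is a minimal Python sketch (assuming NumPy and SciPy) of how regressors for the two speech types can be built by convolving labeled event boxcars with a canonical hemodynamic response function and entered into a whole-brain GLM, followed by an Experiment 2-style group contrast. All event timings, the sampling rate, and the data arrays are hypothetical placeholders for illustration; this is a sketch of the general approach, not the authors' actual pipeline.

    # Sketch of the event-regressor construction and whole-brain GLM
    # described in the abstract. All values below are hypothetical.
    import numpy as np
    from scipy.stats import gamma, ttest_ind

    fs = 1.0          # assumed sampling rate of the HD-DOT time series (Hz)
    n_samples = 600   # ~10 minutes of movie at 1 Hz

    def canonical_hrf(fs, duration=32.0):
        """Double-gamma canonical HRF (SPM/Glover-style shape)."""
        t = np.arange(0, duration, 1.0 / fs)
        peak = gamma.pdf(t, 6)         # positive response peaking ~5-6 s
        undershoot = gamma.pdf(t, 16)  # later, smaller undershoot
        hrf = peak - undershoot / 6.0
        return hrf / hrf.sum()

    def make_regressor(onsets, durations, fs, n_samples):
        """Boxcar over labeled speech events, convolved with the HRF."""
        boxcar = np.zeros(n_samples)
        for onset, dur in zip(onsets, durations):
            boxcar[int(onset * fs):int((onset + dur) * fs)] = 1.0
        return np.convolve(boxcar, canonical_hrf(fs))[:n_samples]

    # Hypothetical hand-labeled events (in seconds): auditory-only
    # (mouth not visible) vs. audiovisual (mouth visible) speech.
    aud_only = make_regressor([12, 95, 300], [8, 5, 10], fs, n_samples)
    audiovis = make_regressor([40, 150, 420], [6, 12, 7], fs, n_samples)

    # Whole-brain GLM: ordinary least squares at every channel/voxel.
    X = np.column_stack([np.ones(n_samples), aud_only, audiovis])
    Y = np.random.randn(n_samples, 5000)   # placeholder for imaging data
    betas, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Experiment 2-style group contrast: two-sample t-test on per-subject
    # beta maps (random placeholders standing in for real estimates).
    betas_ci = np.random.randn(18, 5000)   # cochlear-implant group
    betas_nh = np.random.randn(18, 5000)   # normal-hearing controls
    t_vals, p_vals = ttest_ind(betas_ci, betas_nh, axis=0)

In a real pipeline the group-level t-test would take each subject's beta maps from the first-level GLM, rather than the random arrays used here as stand-ins.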

Topic Areas: Multisensory or Sensorimotor Integration, Speech Perception
