Gesturing influenced by cognitive and linguistic factors
Poster C108 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
Bradley White1,2, Joseph Palagano1,2, Cryss Padilla1,2, Laura-Ann Petitto1,2; 1Brain and Language Center for Neuroimaging (BL2), 2Gallaudet University
Gestures are all around us, and they help us communicate. We instinctively point to select a pastry or happily wave to get a friend’s attention. Yet gestures are often underused in settings where they could facilitate communication across cultural contexts (e.g., airports, hospitals, refugee centers), as well as in spaces that predominantly use sign languages. Gesture is vital wherever people who do not share a language come together. We investigated cognitive factors hypothesized to contribute to successful gesture use, including visual sign language experience, working memory, and attention. Using innovative online webcam eye-tracking technology, we studied participants remotely across the USA. Data were collected and analyzed from 26 hearing adults: 12 monolingual non-signers (English only) and 14 bimodal-bilingual signers (English and American Sign Language, ASL). In a 2 × 3 block design, receptive and expressive performance was measured for 3 categories of gestures: high semantic content (e.g., “eating food”, “taking a photo”), some semantic content (e.g., “shame on you”, “thumbs up”), and low semantic content (e.g., “triangle outline”, “circle outline”). We predicted that prior language experience would affect the likelihood of successful use and comprehension of gestures across the 3 gesture categories. Behavioral responses to gesture stimuli were time-locked with online webcam eye-tracking. Behavior and eye gaze area (visual attention area, pixels × pixels) were analyzed with linear mixed-effects statistical modeling in R. Eye gaze density was further analyzed in MATLAB. There were significant main and interaction effects. Both groups were most accurate when perceiving gestures with high semantic content (e.g., “eating food”) and when producing gestures with low semantic content (e.g., “triangle outline”). Signers were less accurate than non-signers when producing gestures with some semantic content (e.g., “shame on you”); for this condition, signers more often produced sign language instead of gestures. Signers were faster to produce responses than non-signers. Signers also used larger, denser visual attention areas than non-signers, except when perceiving gestures with some semantic content (e.g., “shame on you”), for which there was no group difference. These findings suggest that successful gesture use relies on cognitive and linguistic factors. More semantic content may aid in mapping top-down conceptual knowledge when perceiving gestures, but such gestures may be more difficult to express than those with less semantic content. Visual sign language experience yielded faster responses and larger, denser visual attention areas; however, we observed significant interference from sign language semantics for gestures with only some semantic content. All participants were naïve gesturers; these outcomes might change with gesture training. These results provide new insight into the human capacity to communicate with gestures. Successful gesture use relies on semantic content (verbal working memory), prior language experience with a visual sign language, and visual and executive attention. The present work therefore identifies factors that may increase a person’s likelihood of using and comprehending gestures. Ultimately, this knowledge will inform the creation of optimal gesture-learning contexts to best facilitate communication across languages and cultures.
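For illustration, the kind of linear mixed-effects analysis described above might be sketched in R with the lme4 package as follows; the data frame, variable names, and model formulas here are assumptions for exposition only, not the authors’ actual analysis code.

# Minimal sketch, assuming trial-level data with group (signer vs. non-signer),
# gesture semantic content (high / some / low), and a random intercept per participant.
library(lme4)

# Hypothetical data frame: one row per participant x trial (26 participants x 6 trials).
gesture_data <- data.frame(
  subject          = factor(rep(1:26, each = 6)),
  group            = factor(rep(c("non-signer", "signer"), times = c(12 * 6, 14 * 6))),
  semantic_content = factor(rep(c("high", "some", "low"), times = 52)),
  accuracy         = runif(156),            # placeholder behavioral outcome
  gaze_area        = runif(156, 1e4, 1e5)   # placeholder visual attention area (pixels x pixels)
)

# Group x semantic-content models with random intercepts for participants.
m_accuracy <- lmer(accuracy ~ group * semantic_content + (1 | subject), data = gesture_data)
m_gaze     <- lmer(gaze_area ~ group * semantic_content + (1 | subject), data = gesture_data)
summary(m_accuracy)
summary(m_gaze)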
Topic Areas: Signed Language and Gesture