Online paradigms to measure sequential learning
Poster D9 in Poster Session D with Social Hour, Friday, October 7, 5:30 - 7:15 pm EDT, Millennium Hall
This poster is part of the Sandbox Series.
Gabriel Cler1, Jiwon Kim1, Samantha Bartolo1; 1University of Washington
Online data collection has gained popularity, driven largely by the COVID-19 pandemic but also by the opportunity to recruit larger and more representative samples. In this project, we discuss online adaptations of two sequential learning tasks. Sequence learning is of interest to language researchers because it recruits language-relevant subcortical structures, including the striatum, and underlies rule-based aspects of language, including phonology and morphosyntax. We recruited adults ages 18–45 with no history of communication concerns. Tasks were scripted in PsychoPy/PsychoJS and run on Pavlovia.org. Online platforms cannot produce reaction times that are interpretable in raw form, as keyboard, operating system, and browser configurations all contribute to differences in latency between activating a key and recording the keypress. However, high timing precision allows within-participant reaction time differences to be compared across participants using different computer configurations (Pavlovia <3.5 ms; Bridges et al., 2020). One common sequential learning paradigm is the serial reaction time (SRT) task, in which participants are cued to perform keypresses that are either pseudorandomly ordered or comprise a repeating sequence. Learning is indexed either as the mean reaction time of the final sequence block or (relevant for online testing) as the difference in reaction time between random and sequence blocks. Here, participants completed an implicit version of the task using a 10-item sequence following a standard design (Lammertink et al., 2020). However, 9/9 participants reported noticing that the keypresses formed a repeating pattern. To reduce this explicit awareness of the sequence, we added 10-15 pseudorandom keypresses after every two repetitions of the sequence. Fewer participants (5/23; 21%) explicitly noted a sequence.
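The trial-order logic described above can be sketched in Python. This is an illustrative reconstruction, not the authors' actual PsychoPy script: the key names, sequence items, repetition counts, and helper names (`pseudorandom_run`, `build_trial_order`, `learning_index`) are all assumptions for the sake of the example.

```python
import random

KEYS = ["d", "f", "j", "k"]  # assumed four response keys
SEQUENCE = ["d", "f", "k", "j", "d", "k", "f", "j", "k", "d"]  # example 10-item sequence

def pseudorandom_run(n, keys):
    """Draw n pseudorandom keys, avoiding immediate repeats."""
    run = []
    for _ in range(n):
        choices = [k for k in keys if not (run and k == run[-1])]
        run.append(random.choice(choices))
    return run

def build_trial_order(n_sequence_reps=20, seed=0):
    """Repeat the sequence, inserting 10-15 pseudorandom trials
    after every two repetitions to reduce explicit awareness."""
    random.seed(seed)
    trials = []
    for rep in range(n_sequence_reps):
        trials.extend(SEQUENCE)
        if rep % 2 == 1:  # after every second repetition
            trials.extend(pseudorandom_run(random.randint(10, 15), KEYS))
    return trials

def learning_index(rt_random_ms, rt_sequence_ms):
    """Learning index: mean RT on random trials minus mean RT on
    sequence trials (a within-participant difference, so it remains
    comparable across different keyboard/browser latencies)."""
    return (sum(rt_random_ms) / len(rt_random_ms)
            - sum(rt_sequence_ms) / len(rt_sequence_ms))
```

For example, a participant averaging 500 ms on random trials and 460 ms on sequence trials would have a learning index of 40 ms, on the order of the 46 ms reported for the initial design.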
However, performance on the task also declined, so no further adjustments were made: initial participants had a motor learning index of 46 ms, while those with interspersed random keypresses had an index of only 14 ms. Another paradigm involving implicit sequential learning is the (visual) statistical learning task. In this paradigm, participants watch a stream of shapes containing repeated triplets, then identify triplets that were or were not in the stream in a two-alternative forced choice (2AFC) test. Because sustained attention is difficult to ensure in online testing, we implemented a cover task in which participants indicated when they saw an immediately repeated item (Turk-Browne et al., 2005). The original version of this task (no cover task, shapes moving around the screen) reported 95% accuracy on the 2AFC test (Fiser & Aslin, 2002). Turk-Browne et al. interspersed two streams of 312 shapes each and reported 59% accuracy on the attended stream (Turk-Browne et al., 2005). Our piloting of the Turk-Browne version showed a mean accuracy of 49% in 5 participants. Simplifying to one stream of 312 shapes and adjusting the presentation time of each stimulus still produced performance around chance. Finally, we doubled the stream length to 624 shapes, for which 25 participants had a mean accuracy of 55%. Online testing will likely remain relevant, and further research is needed to explore the validity of these and related paradigms for understanding the interrelation between sequence learning and language.
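The stream construction for the statistical learning task can be sketched as follows. This is a minimal illustration, not the authors' code: the shape labels, the assumption of four triplets, and the function names (`build_stream`, `accuracy_2afc`) are hypothetical. The only constraints taken from the paradigm are that shapes group into fixed triplets and the stream concatenates triplets in pseudorandom order.

```python
import random

# Assumed inventory: 12 shapes grouped into 4 fixed triplets.
TRIPLETS = [("A", "B", "C"), ("D", "E", "F"),
            ("G", "H", "I"), ("J", "K", "L")]

def build_stream(n_shapes=624, seed=0):
    """Concatenate triplets in pseudorandom order, never presenting
    the same triplet twice in a row, until the stream reaches
    n_shapes items (e.g. 312 or 624 shapes)."""
    random.seed(seed)
    stream, last = [], None
    while len(stream) < n_shapes:
        t = random.choice([t for t in TRIPLETS if t is not last])
        stream.extend(t)
        last = t
    return stream[:n_shapes]

def accuracy_2afc(responses, correct):
    """Proportion of 2AFC test trials answered correctly
    (chance = 50%)."""
    return sum(r == c for r, c in zip(responses, correct)) / len(correct)
```

With this structure, the only cue to triplet boundaries is the transitional probability between shapes (1.0 within a triplet, lower across triplet boundaries), which is what the 2AFC test probes.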
Topic Areas: Methods, Multisensory or Sensorimotor Integration