Presentation


Enhanced BCI speller with hybrid framework

Poster Session C, Friday, October 25, 4:30 - 6:00 pm, Great Hall 3 and 4

Myung Yung Jeong1, JunHyun Kim2, Malik M Naeem Mannan3; 1Pusan National University

Steady-state visual evoked potentials (SSVEPs) are extensively used in brain-computer interfaces (BCIs) due to their robustness, high classification accuracy, and impressive information transfer rates (ITRs). However, multiple flickering stimuli can cause significant user discomfort, including tiredness and fatigue. To address these issues, we propose a novel hybrid speller design that integrates electroencephalography (EEG) and eye-tracking to enhance user comfort while managing numerous flickering stimuli. Our BCI speller reduces the number of required frequencies by using only six different frequencies to classify forty-eight targets, significantly improving ITR efficiency. A canonical correlation analysis (CCA)-based framework supports the system’s effectiveness, accurately identifying target frequencies with just 1 second of flickering signal. In tests, our system demonstrated an average classification accuracy of about 90% in cued-spelling tasks. It achieved an average ITR of 184.06 ± 12.761 bits per minute in cued-spelling and 190.73 ± 17.849 bits per minute in free-spelling tasks. These results highlight the superior performance of our hybrid speller compared to existing spellers in terms of target classification, accuracy, and ITR. By integrating eye-tracking technology with EEG, the cognitive load on users is reduced and the intended target is identified more precisely. This hybrid approach leverages the strengths of both EEG and eye-tracking, creating a more responsive and user-friendly BCI system. Our speller’s performance in both cued and free-spelling tasks demonstrates its practical applicability and potential for real-world use. High ITR and classification accuracy enable faster and more accurate communication, which is especially beneficial for individuals with severe physical disabilities who rely on BCIs for communication.

Acknowledgement

This study was supported by the Institute for Defense Technology Planning and Advancement through the Defense Venture Innovation Technology Support Project (V220021) and by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2022R1A2B5B01002377).
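
The abstract describes a CCA-based framework that identifies which of the six stimulus frequencies a user is attending to from about 1 second of multichannel EEG. The minimal sketch below illustrates the standard CCA approach to SSVEP frequency identification, not the authors' exact pipeline; the sampling rate, channel count, harmonic count, and candidate frequency values are placeholder assumptions.

# Minimal sketch of standard CCA-based SSVEP frequency identification
# (illustrative only; parameters below are assumptions, not the authors' settings).
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    # Max canonical correlation between an EEG epoch (n_samples x n_channels)
    # and sine/cosine reference signals at `freq` and its harmonics.
    n_samples = eeg.shape[0]
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    x_c, y_c = CCA(n_components=1).fit_transform(eeg, Y)
    return abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])

def classify_frequency(eeg, candidate_freqs, fs):
    # Pick the candidate frequency whose references correlate best with the EEG.
    scores = [cca_score(eeg, f, fs) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Example: six hypothetical stimulus frequencies, 1 s epoch of 8-channel EEG.
fs = 250                                    # assumed sampling rate (Hz)
freqs = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0]  # placeholder stimulus set
eeg_epoch = np.random.randn(fs, 8)          # dummy data standing in for real EEG
print(classify_frequency(eeg_epoch, freqs, fs))

In the reported system, gaze position from the eye-tracker would narrow the selection among the forty-eight targets, so only the six frequencies need to be distinguished by the CCA stage.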
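
The ITR figures quoted above (in bits per minute) are conventionally computed with the Wolpaw formula. The snippet below states that formula for context; the selection time used in the example is a placeholder, since the abstract does not give the exact timing parameters.

# Wolpaw information transfer rate (bits/minute), as commonly reported for spellers.
# The timing value below is an assumption for illustration only.
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    # Assumes 0 < accuracy < 1 and n_targets > 1.
    p, n = accuracy, n_targets
    bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# e.g. 48 targets at 90% accuracy with ~1.5 s per selection (hypothetical timing)
print(round(itr_bits_per_min(48, 0.90, 1.5), 1))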

Topic Areas: Writing and Spelling, Multisensory or Sensorimotor Integration
