EEG-VTTCNet: A loss joint training model based on the vision transformer and the temporal convolution network for EEG-based motor imagery classification.
Shi, Xingbin; Li, Baojiang; Wang, Wenlong; Qin, Yuxin; Wang, Haiyan; Wang, Xichao.
Affiliation
  • Shi X; The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China.
  • Li B; The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China. Electronic address: libj@sdju.edu.cn.
  • Wang W; The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China.
  • Qin Y; The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China.
  • Wang H; The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China.
  • Wang X; The School of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China; Intelligent Decision and Control Technology Institute, Shanghai Dianji University, Shanghai 201306, China.
Neuroscience ; 556: 42-51, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39103043
ABSTRACT
A brain-computer interface (BCI) is a technology that directly connects signals between the human brain and a computer or other external device. Motor imagery electroencephalographic (MI-EEG) signals are considered a promising paradigm for BCI systems, with a wide range of potential applications in medical rehabilitation, human-computer interaction, and virtual reality. Accurate decoding of MI-EEG signals poses a significant challenge due to the quality of the collected EEG data and to subject variability, so developing an efficient MI-EEG decoding network is crucial and warrants research. This paper proposes a loss joint training model based on the vision transformer (ViT) and the temporal convolutional network, termed EEG-VTTCNet, to classify MI-EEG signals. To exploit the strengths of both modules, EEG-VTTCNet adopts a shared-convolution strategy and a dual-branch strategy: the two branches learn complementary representations and jointly train the shared convolutional module, yielding better performance. We conducted experiments on the BCI Competition IV-2a and IV-2b datasets, and the proposed network outperformed current state-of-the-art techniques with accuracies of 84.58% and 90.94%, respectively, in the subject-dependent mode. In addition, we used t-SNE to visualize the features extracted by the proposed network, further demonstrating the effectiveness of the feature-extraction framework. We also conducted extensive ablation and hyperparameter-tuning experiments to construct a robust network architecture that generalizes well.
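The joint loss training described in the abstract can be sketched as a weighted sum of the per-branch classification losses, so that gradients from both the ViT branch and the TCN branch flow back into the shared convolutional front end. The function names, the single weight `alpha`, and the toy shapes below are illustrative assumptions for a minimal sketch, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def joint_loss(vit_logits, tcn_logits, labels, alpha=0.5):
    # Weighted combination of the two branch losses; during training,
    # backpropagating this single scalar updates both branches and
    # the shared convolutional module they sit on top of.
    return alpha * cross_entropy(vit_logits, labels) + \
           (1 - alpha) * cross_entropy(tcn_logits, labels)

# Toy example: 2 trials, 2 MI classes, both branches confident and correct.
labels = np.array([0, 1])
vit_logits = np.array([[5.0, 0.0], [0.0, 5.0]])
tcn_logits = np.array([[4.0, 0.0], [0.0, 4.0]])
loss = joint_loss(vit_logits, tcn_logits, labels)
```

In practice `alpha` would be a tuned hyperparameter balancing the complementary branches; the abstract does not specify the weighting the authors used.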
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Neural Networks, Computer / Electroencephalography / Brain-Computer Interfaces / Imagination Limits: Humans Language: English Journal: Neuroscience Year: 2024 Document type: Article Country of affiliation: China Country of publication: United States