• 1. School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin 300072, P. R. China;
  • 2. Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin 300072, P. R. China;
GAO Feng, Email: gaofeng@tju.edu.cn

In the field of brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS), traditional subject-specific decoding methods suffer from long calibration times and poor cross-subject generalizability, which restricts the adoption of BCI systems in daily life and clinical practice. To address these limitations, this study proposes a novel deep transfer learning approach, referred to as TL-rIRN, that combines a revised inception-residual network (rIRN) model with a model-based transfer learning (TL) strategy. Cross-subject recognition experiments on mental arithmetic (MA) and mental singing (MS) tasks were performed to validate the effectiveness and superiority of the TL-rIRN approach. The results show that, compared with subject-specific decoding methods and other deep transfer learning methods, TL-rIRN significantly shortens the calibration time, reduces the training time of the target model and the consumption of computational resources, and markedly enhances cross-subject decoding performance. In summary, this study provides a basis for selecting cross-subject, cross-task, and real-time decoding algorithms for fNIRS-BCI systems, with potential applications in constructing a convenient and universal BCI system.
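The abstract does not detail the rIRN architecture, but the model-based TL strategy it names follows a common pattern: pretrain a feature extractor on data from source subjects, freeze it, and fine-tune only a small classification head on the target subject's short calibration session. The NumPy sketch below illustrates that pattern under stated assumptions; the frozen random projection stands in for a pretrained extractor, and all data, names, and dimensions are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(feats, labels, w, b, lr=0.5, epochs=200):
    """Fine-tune only the logistic-regression head by gradient descent."""
    for _ in range(epochs):
        p = sigmoid(feats @ w + b)
        grad_w = feats.T @ (p - labels) / len(labels)
        grad_b = np.mean(p - labels)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Source" stage: pretend these weights were learned from many source
# subjects (a stand-in for the pretrained rIRN feature extractor).
W_feat = rng.normal(size=(8, 16))

def extract(x):
    # Frozen feature extractor: its weights are never updated below.
    return np.tanh(x @ W_feat)

# Synthetic target-subject calibration data (hypothetical stand-in for
# fNIRS features from a short MA-vs-MS calibration session).
X_tgt = rng.normal(size=(60, 8))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(float)

# Transfer stage: keep extract() frozen, train only the small head on
# the limited target-subject data -- this is what shortens calibration.
w = np.zeros(16)
b = 0.0
w, b = train_head(extract(X_tgt), y_tgt, w, b)

acc = np.mean((sigmoid(extract(X_tgt) @ w + b) > 0.5) == y_tgt)
print(f"target-subject training accuracy: {acc:.2f}")
```

Because only the 17 head parameters are updated, fine-tuning is cheap in both time and compute, which mirrors the resource savings the study reports for the model-based TL strategy.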

Citation: ZHANG Yao, LIU Dongyuan, GAO Feng. A deep transfer learning approach for cross-subject recognition of mental tasks based on functional near-infrared spectroscopy. Journal of Biomedical Engineering, 2024, 41(4): 673-683. doi: 10.7507/1001-5515.202310002
