Recently, deep neural networks (DNNs) have been widely used for electrocardiogram (ECG) signal classification, but previous models have a limited ability to extract features from raw ECG data. In this paper, a deep residual network based on pyramidal convolutional layers (PC-DRN) was proposed for ECG signal classification. The pyramidal convolutional (PC) layer simultaneously extracts multi-scale features from the raw ECG data, and a deep residual network was then designed to train the classification model for arrhythmia detection. The public dataset provided by the PhysioNet Computing in Cardiology Challenge 2017 (CinC2017) was used to validate the classification of four types of ECG data. The F1 score, the harmonic mean of precision and recall, was selected as the evaluation index. The experimental results showed that the average sequence-level F1 (SeqF1) of PC-DRN improved from 0.857 to 0.920, and the average set-level F1 (SetF1) improved from 0.876 to 0.925. Therefore, the PC-DRN model proposed in this paper provides a promising approach to the feature extraction and classification of ECG signals, and an effective tool for arrhythmia classification.
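The core idea of the PC layer — parallel convolutions at several kernel sizes over the same input, stacked so each output channel covers a different receptive-field scale — can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the kernel sizes are illustrative and the random kernels stand in for learned weights.

```python
import numpy as np

def pyramidal_conv(x, kernel_sizes=(3, 5, 9)):
    """Sketch of a pyramidal convolutional (PC) layer: apply parallel
    1-D convolutions with increasing kernel sizes to the same signal
    and stack the outputs, so each row of the result captures features
    at a different temporal scale. Kernels are random stand-ins for
    learned weights; sizes are hypothetical, not the paper's."""
    rng = np.random.default_rng(0)
    scales = []
    for k in kernel_sizes:
        kernel = rng.normal(size=k) / k          # one filter per scale
        scales.append(np.convolve(x, kernel, mode="same"))
    return np.stack(scales)                      # (n_scales, len(x))

ecg = np.sin(np.linspace(0, 8 * np.pi, 300))     # toy single-lead ECG segment
feats = pyramidal_conv(ecg)                      # multi-scale feature map
```

In a full model, each scale would hold many learned filters and the stacked feature map would feed the residual blocks that follow.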
Objective To recognize the different phases of Korotkoff sounds through deep learning, so as to improve the accuracy of blood pressure measurement in different populations. Methods A classification model for Korotkoff sound phases was designed that fuses an attention mechanism (Attention), a residual network (ResNet) and a bidirectional long short-term memory network (BiLSTM). First, single Korotkoff sound signals were extracted beat by beat from the whole Korotkoff sound recording, and each was converted into a Mel spectrogram. Then, local features of the Mel spectrogram were extracted with the attention mechanism and the ResNet, the BiLSTM modeled the temporal relations between features, and a fully connected layer reduced the feature dimension. Finally, classification was completed by a softmax function. The dataset used in this study was collected from 44 volunteers (24 females and 20 males, with an average age of 36 years), and model performance was verified using 10-fold cross-validation. Results The classification accuracy of the established model for the five types of Korotkoff sound phases was 93.4%, higher than that of other models. Conclusion This study proves that deep learning can accurately classify Korotkoff sound phases, which lays a solid technical foundation for the subsequent design of automatic blood pressure measurement methods based on this classification.
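The Mel-spectrogram front end described above follows a standard recipe: frame the beat-long signal, take a magnitude STFT, and apply a triangular mel filterbank. A minimal numpy sketch is shown below; all parameters (sampling rate, FFT size, hop, mel-band count) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr=2000, n_fft=256, hop=128, n_mels=40):
    """Standard log-Mel recipe: Hann-windowed frames, power spectrum,
    triangular mel filterbank, log compression. Parameters are
    hypothetical, not those of the paper."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2   # (frames, n_fft//2+1)
    # Triangular filters centered at equally spaced mel frequencies.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return np.log(power @ fb.T + 1e-10)                 # (frames, n_mels)

beat = np.sin(2 * np.pi * 40 * np.arange(2000) / 2000) # toy 1 s Korotkoff beat
mel = log_mel_spectrogram(beat)
```

The resulting time-by-mel matrix is what the Attention-ResNet stage would consume as a 2-D image.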
In the field of brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS), traditional subject-specific decoding methods suffer from long calibration times and low cross-subject generalizability, which restricts the adoption of BCI systems in daily life and the clinic. To address this dilemma, this study proposes a novel deep transfer learning approach, referred to as TL-rIRN, that combines a revised inception-residual network (rIRN) model with a model-based transfer learning (TL) strategy. Cross-subject recognition experiments on mental arithmetic (MA) and mental singing (MS) tasks were performed to validate the effectiveness and superiority of the TL-rIRN approach. The results show that TL-rIRN significantly shortens the calibration time, reduces the training time of the target model and the consumption of computational resources, and markedly enhances cross-subject decoding performance compared with subject-specific decoding methods and other deep transfer learning methods. In summary, this study provides a basis for selecting cross-subject, cross-task, and real-time decoding algorithms for fNIRS-BCI systems, with potential applications in constructing a convenient and universal BCI system.
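The model-based TL strategy — reuse a network body pretrained on source subjects and retrain only a small head on the target subject's few calibration trials — can be illustrated with a toy numpy sketch. Everything here is a hypothetical stand-in: a random fixed matrix plays the role of the pretrained rIRN body, Gaussian blobs play the role of MA vs. MS trials, and a logistic-regression head is fine-tuned while the body stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained rIRN body: a fixed nonlinear feature map.
W_body = rng.normal(size=(16, 8))            # "pretrained" weights, frozen

def features(X):
    return np.tanh(X @ W_body)               # frozen body forward pass

def fine_tune_head(X, y, lr=0.5, steps=200):
    """Model-based TL step: the body is untouched; only a new
    logistic-regression head is trained on target-subject data."""
    F = features(X)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
        w -= lr * F.T @ (p - y) / len(y)     # gradient of log loss
        b -= lr * np.mean(p - y)
    return w, b

# Toy target-subject calibration set: two blobs for the two mental tasks.
X = np.vstack([rng.normal(-1, 1, size=(40, 16)),
               rng.normal(+1, 1, size=(40, 16))])
y = np.concatenate([np.zeros(40), np.ones(40)])

W_before = W_body.copy()
w, b = fine_tune_head(X, y)
acc = np.mean(((features(X) @ w + b) > 0) == y)
```

Because only the small head is optimized, calibration touches few parameters, which is the mechanism behind the shorter training time and lower computational cost reported above.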