The aging population and the increasing prevalence of chronic diseases among the elderly place a significant economic burden on families and society. Non-invasive wearable sensing systems can continuously monitor vital physiological signs in real time and assess health status; they can also provide efficient and convenient information feedback, thereby reducing the health risks posed by chronic diseases in the elderly. In this study, a wearable system for detecting physiological and behavioral signals was developed. We explored the design of flexible wearable sensing technology and its application in sensing systems. The wearable system comprised a smart hat, smart clothes, smart gloves, and smart insoles, achieving long-term continuous monitoring of physiological and motion signals. The performance of the system was verified, and the new sensing system was compared with commercial equipment; the evaluation results demonstrated that the proposed system performed comparably to existing systems. In summary, the proposed flexible sensor system provides an accurate, detachable, expandable, user-friendly, and comfortable solution for monitoring physiological and motion signals. It is expected to be used in remote healthcare monitoring and to provide personalized information monitoring, disease prediction, and diagnosis for doctors and patients.
Sleep stage classification is essential for clinical disease diagnosis and sleep quality assessment. Most existing methods for sleep stage classification are based on single-channel or single-modal signals and extract features with a single-branch deep convolutional network, which not only hinders the capture of diverse sleep-related features and increases the computational cost, but also degrades the accuracy of sleep stage classification. To address this problem, this paper proposes an end-to-end multi-modal physiological time-frequency feature extraction network (MTFF-Net) for accurate sleep stage classification. First, multi-modal physiological signals comprising the electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG), and electromyogram (EMG) are converted into two-dimensional time-frequency images by the short-time Fourier transform (STFT). Then, a time-frequency feature extraction network combining a multi-scale EEG compact convolutional network (Ms-EEGNet) and a bidirectional gated recurrent unit (Bi-GRU) network extracts multi-scale spectral features related to sleep feature waveforms and temporal features related to sleep stage transitions. Under the American Academy of Sleep Medicine (AASM) sleep stage classification criteria, the model achieved 84.3% accuracy on the five-class task on the third subgroup of the Institute of Systems and Robotics of the University of Coimbra Sleep Dataset (ISRUC-S3), with a macro F1 score of 83.1% and a Cohen's kappa coefficient of 0.798. The experimental results show that the proposed model achieves higher classification accuracy and promotes the application of deep learning algorithms in assisting clinical decision-making.
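The first step of the pipeline described above can be illustrated with a minimal sketch: a one-dimensional physiological signal (e.g. a single EEG channel) is converted into a two-dimensional time-frequency image via the STFT. The sampling rate, epoch length, and window size below are illustrative assumptions, not the paper's actual settings, and the synthetic sinusoid merely stands in for a recorded signal.

```python
import numpy as np
from scipy.signal import stft

# Assumed parameters for illustration only (not from the paper)
fs = 200                         # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)     # one 30-second sleep epoch
x = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz alpha-like wave

# Short-time Fourier transform: 256-sample Hann windows by default
f, frames, Zxx = stft(x, fs=fs, nperseg=256)

# Magnitude spectrogram: the 2-D time-frequency "image" that a
# CNN such as Ms-EEGNet would take as input
tf_image = np.abs(Zxx)

print(tf_image.shape)  # (frequency bins, time frames)
```

In a multi-modal setting, the same transform would be applied per channel (EEG, ECG, EOG, EMG) and the resulting images stacked or concatenated before feature extraction; the dominant row of the magnitude spectrogram here sits near the 10 Hz component of the synthetic input.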