In recent years, epileptic seizure detection based on electroencephalography (EEG) has attracted widespread attention in academia. However, seizure data are difficult to collect, and models trained on so few samples are prone to overfitting. To address this problem, this paper used the CHB-MIT epilepsy EEG dataset from Boston Children's Hospital and applied the wavelet transform for data augmentation, generating samples with different wavelet scale factors. In addition, by combining deep learning, ensemble learning, transfer learning and other methods, a patient-specific epilepsy detection method with high accuracy was proposed for the setting of insufficient training samples. In the experiments, wavelet scale factors of 2, 4 and 8 were compared and verified. With a scale factor of 8, the average accuracy, average sensitivity and average specificity were 95.47%, 93.89% and 96.48%, respectively. Comparative experiments with recent related studies verified the advantages of the proposed method. These results may provide a reference for the clinical application of seizure detection.
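The wavelet-based augmentation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the abstract does not specify the mother wavelet or the exact augmentation pipeline, so this sketch uses a Ricker (Mexican hat) wavelet and plain convolution to produce one surrogate channel per scale factor (2, 4 and 8, matching the scale factors compared in the experiments).

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet sampled at `points` positions, width `a`."""
    t = np.arange(points) - (points - 1) / 2
    amp = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def wavelet_augment(signal, scales=(2, 4, 8)):
    """Return one wavelet-filtered surrogate of `signal` per scale factor.

    Each scale emphasizes a different frequency band of the EEG trace,
    yielding additional training samples from the same recording.
    """
    out = []
    for a in scales:
        kernel = ricker(min(10 * a, len(signal)), a)
        out.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(out)

# Toy single-channel EEG segment: a sinusoid plus noise
rng = np.random.default_rng(0)
eeg = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
aug = wavelet_augment(eeg)
print(aug.shape)  # (3, 1024): one augmented trace per scale factor
```

In practice each of the three filtered traces would be added to the training set alongside the original segment, with the original label retained.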
Hepatocellular carcinoma (HCC) is the most common liver malignancy, and HCC segmentation and prediction of the degree of pathological differentiation are two important tasks in surgical treatment and prognosis evaluation. Existing methods usually solve these two problems independently, without considering the correlation between the tasks. In this paper, we propose a multi-task learning model that performs the segmentation and classification tasks simultaneously. The model consists of a segmentation subnet and a classification subnet. A multi-scale feature fusion method is introduced in the classification subnet to improve classification accuracy, and a boundary-aware attention module is designed in the segmentation subnet to alleviate tumor over-segmentation. A dynamic weighted average multi-task loss is used so that the model achieves optimal performance on both tasks simultaneously. On 295 HCC patients, the method outperformed other multi-task learning methods, achieving a Dice similarity coefficient (Dice) of (83.9 ± 0.88)% on the segmentation task, and an average recall of (86.08 ± 0.83)% and an F1 score of (80.05 ± 1.7)% on the classification task. The results show that the proposed multi-task learning method performs the classification and segmentation tasks well at the same time, and can provide a theoretical reference for the clinical diagnosis and treatment of HCC patients.
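The "dynamic weighted average multi-task loss" mentioned above is not defined in the abstract; assuming it follows the common Dynamic Weight Averaging (DWA) scheme, where each task's weight is set from the ratio of its recent losses so that slowly improving tasks receive more weight, it can be sketched as:

```python
import numpy as np

def dwa_weights(loss_history, temperature=2.0):
    """Dynamic Weight Averaging sketch (assumed formulation, not the paper's).

    loss_history: list of per-epoch loss vectors, one value per task.
    Returns one weight per task; weights sum to the number of tasks K,
    so the overall loss scale stays comparable across epochs.
    """
    K = len(loss_history[-1])
    if len(loss_history) < 2:
        return np.ones(K)  # equal weights until two epochs of history exist
    prev = np.asarray(loss_history[-1], dtype=float)
    prev2 = np.asarray(loss_history[-2], dtype=float)
    r = prev / prev2              # tasks whose loss drops slowly get larger r
    w = np.exp(r / temperature)   # temperature softens the weight contrast
    return K * w / w.sum()

# Example: segmentation loss is stalling while classification loss falls fast,
# so the next epoch should weight the segmentation term more heavily.
history = [[1.0, 1.0], [0.95, 0.6]]
w = dwa_weights(history)
print(w)  # first (segmentation) weight is the larger of the two
```

The weighted total loss for an epoch would then be `w[0] * seg_loss + w[1] * cls_loss`, recomputing `w` each epoch from the updated history.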
Recently, deep learning has achieved impressive results in medical imaging tasks. However, it usually requires large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotations is a challenge. Transfer learning and self-supervised learning are the two commonly used remedies, but both have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method treats images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, thus improving the model's understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain adaptive denormalization method that transforms source-domain images using statistical information from the target domain. The method was validated on two multimodal medical image classification tasks: on the microvascular invasion recognition task, it achieved an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, improving on other conventional learning methods; on the brain tumor pathology grading task, it also achieved significant improvements. The results show that the method performs well on multimodal medical images and can provide a reference solution for pre-training on multimodal medical images.
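The domain adaptive denormalization step above can be sketched as follows. The abstract gives only the idea (transform source-domain images using target-domain statistics), so this is an assumed AdaIN-style formulation: standardize the source image with its own mean and standard deviation, then rescale with the target domain's statistics. The exact statistics and granularity used in the paper may differ.

```python
import numpy as np

def domain_adaptive_denorm(source, target_mean, target_std, eps=1e-6):
    """Map a source-modality image onto target-domain intensity statistics.

    Sketch of an assumed formulation: whiten with the source image's own
    mean/std, then denormalize with the target domain's mean/std, so the
    output matches the target domain's first- and second-order statistics.
    """
    s_mean, s_std = float(source.mean()), float(source.std())
    normalized = (source - s_mean) / (s_std + eps)
    return normalized * target_std + target_mean

# Toy source image and target-domain statistics (illustrative values)
rng = np.random.default_rng(0)
src = rng.random((64, 64)).astype(np.float32)
out = domain_adaptive_denorm(src, target_mean=0.3, target_std=0.15)
print(round(float(out.mean()), 3), round(float(out.std()), 3))  # ~0.3, ~0.15
```

In the contrastive setup described above, such a transform lets modality-specific images be brought into a shared intensity range before forming cross-modality positive pairs.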