West China Medical Publishers
Keyword search for "information fusion": 3 results
  • Study on Electroencephalogram Recognition Framework by Common Spatial Pattern and Fuzzy Fusion

    Common spatial pattern (CSP) is a widely used spatial-filtering method for extracting features from electroencephalogram (EEG) signals, but it is prone to serious over-fitting. In this paper, after feature extraction and recognition, we present a new approach in which the recognition results are fused to mitigate over-fitting and improve recognition accuracy. We then propose a new EEG recognition framework that uses CSP to extract features from EEG signals, linear discriminant analysis (LDA) classifiers to identify the user's mental state from those features, and the Choquet fuzzy integral to fuse the classifier outputs. Brain-computer interface (BCI) Competition 2005 data set IVa was used to validate the framework. The results demonstrated that the framework effectively improved recognition and, to some extent, overcame the over-fitting problem of CSP, showing its effectiveness for EEG processing.
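The Choquet-integral fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fuzzy measure is assumed here to be a Sugeno λ-measure built from per-classifier densities, which the abstract does not specify.

```python
import numpy as np

def sugeno_lambda(densities, tol=1e-10):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure parameter."""
    g = np.asarray(densities, float)
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    s = g.sum()
    if abs(s - 1.0) < 1e-12:
        return 0.0                       # additive measure: lam = 0
    # sum > 1 -> lam in (-1, 0); sum < 1 -> lam in (0, inf)
    lo, hi = (-1.0 + 1e-12, -1e-12) if s > 1 else (1e-12, 1.0)
    if s < 1:
        while f(hi) < 0:                 # grow the bracket until the sign flips
            hi *= 2
    while hi - lo > tol:                 # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def choquet_fuse(scores, densities):
    """Choquet integral of per-classifier scores w.r.t. a Sugeno lambda-measure."""
    h = np.asarray(scores, float)
    g = np.asarray(densities, float)
    lam = sugeno_lambda(g)
    order = np.argsort(h)                # ascending: h_(1) <= ... <= h_(n)
    total, prev = 0.0, 0.0
    for i, idx in enumerate(order):
        # measure of A_i = {classifiers with the i-th smallest score, ..., largest}
        gA = 0.0
        for j in order[i:]:
            gA = gA + g[j] + lam * gA * g[j]   # g(A u {x}) = g(A) + g_x + lam*g(A)*g_x
        total += (h[idx] - prev) * gA
        prev = h[idx]
    return total
```

When the densities sum to 1 the measure is additive and the integral reduces to a weighted average of the classifier scores; non-additive densities let the fusion reward or penalise agreement between classifiers.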

  • Segmentation Method of Colour White Blood Cell Image Based on HSI Modified Space Information Fusion

    This paper presents an automatic segmentation method for white blood cells based on information fusion in a modified HSI colour space. First, the original cell image is converted to the HSI colour space. Because the piecewise transformation formula for the H component is discontinuous, cytoplasm regions that appear visually uniform in the original image lose their uniformity in this channel. We therefore modified the formula, then extracted information on the nucleus, cytoplasm, red blood cells and background region according to the distribution characteristics of the H, S and I channels, and used information-fusion theory and methods to build fusion image I and fusion image II, which contained only the cytoplasm and a small amount of interference, from which the nucleus and cytoplasm were extracted respectively. Finally, we marked the nucleus and cytoplasm regions to obtain the final segmentation result. Simulation results showed that the new white blood cell segmentation algorithm had high accuracy, robustness and universality.
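The conventional RGB-to-HSI conversion that the method starts from can be sketched as below. Only the standard piecewise H formula is shown; the authors' modified formula is not given in the abstract, so it is not reproduced here.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard RGB -> HSI conversion; rgb is a float array in [0, 1], shape (..., 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12                              # guards division by zero on gray pixels
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    # piecewise hue: theta when B <= G, else the reflex angle 2*pi - theta
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)
    i = (r + g + b) / 3.0                    # intensity
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)   # saturation
    return np.stack([h, s, i], axis=-1)
```

The branch in the hue formula is the discontinuity the paper refers to: pixels whose B and G values straddle the `b <= g` boundary can receive very different H values even when they look uniform in RGB.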

  • A lightweight recurrence prediction model for high grade serous ovarian cancer based on hierarchical transformer fusion metadata

    High-grade serous ovarian cancer is highly malignant; by the time it is detected it often infiltrates surrounding soft tissue and has metastasised to the peritoneum and lymph nodes, with peritoneal seeding and distant metastasis. Whether recurrence occurs is therefore an important reference for surgical planning and choice of treatment. Current recurrence prediction models do not consider the potential pathological relationships between internal tissues of the whole ovary; they use convolutional neural networks to extract local-region features, but their accuracy is low and their computational cost is high. To address this issue, this paper proposes a new lightweight deep learning model for predicting recurrence of high-grade serous ovarian cancer. The model first uses ghost convolution (Ghost Conv) and coordinate attention (CA) to build ghost inverted residual (SCblock) modules that extract local feature information from images. It then captures global information and integrates multi-level information through the proposed hierarchical fusion Transformer (STblock) modules, enhancing interaction between different layers. The Transformer module unfolds the feature map to compute the corresponding region blocks, then folds it back to reduce computational cost. Finally, each STblock module fuses deep and shallow feature information and incorporates the patient's clinical metadata for recurrence prediction. Experimental results show that, compared with the mainstream lightweight mobile vision Transformer (MobileViT) network, the proposed slicer vision Transformer (SlicerViT) network improves accuracy, precision, sensitivity, and F1 score with only 1/6 of the computational cost and half the parameter count. This research confirms that the proposed model predicts recurrence of high-grade serous ovarian cancer more accurately and efficiently. In the future, it could serve as an auxiliary diagnostic technique to improve patient survival rates and facilitate deployment of the model on embedded devices.
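The ghost convolution named in the abstract (a GhostNet-style module) can be sketched in plain NumPy. The 1x1 primary convolution, the depthwise "cheap" kernels, and the 2x expansion ratio below are illustrative assumptions, not the paper's exact SCblock configuration.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2D cross-correlation for one channel. x: (H, W), w: (k, k)."""
    k = w.shape[0]
    xp = np.pad(x, k // 2)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def ghost_module(x, primary_w, cheap_w):
    """
    Ghost convolution sketch: a small primary conv produces m intrinsic
    feature maps, then cheap depthwise ops generate one 'ghost' map per
    intrinsic map; the two sets are concatenated, halving the cost of a
    full convolution with the same output width.
    x: (C_in, H, W); primary_w: (m, C_in) 1x1 weights; cheap_w: (m, k, k).
    """
    m = primary_w.shape[0]
    # primary 1x1 convolution -> intrinsic feature maps, shape (m, H, W)
    intrinsic = np.einsum('mc,chw->mhw', primary_w, x)
    # cheap operation: one depthwise conv per intrinsic map -> ghost maps
    ghosts = np.stack([conv2d_same(intrinsic[i], cheap_w[i]) for i in range(m)])
    return np.concatenate([intrinsic, ghosts], axis=0)   # (2*m, H, W)
```

The parameter saving comes from the second half of the output channels being produced by per-map depthwise kernels (k*k weights each) instead of full cross-channel convolutions, which is consistent with the halved parameter count the abstract reports for the overall network.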

