West China Medical Publishers
Keyword search: "Pre-trained language model" (2 results)
  • A study on multi-class classification of medical questionnaire item texts based on prompt learning

    Objective Current medical questionnaire resources are mainly processed and organized at the document level, which hampers user access and reuse at the level of individual questionnaire items. This study proposes a multi-class classification method for items in medical questionnaires in low-resource scenarios, to support fine-grained organization and provision of medical questionnaire resources. Methods We introduced a novel, BERT-based prompt learning approach for multi-class classification of medical questionnaire items. First, we curated a small corpus of lung cancer medical assessment items by collecting relevant clinical assessment questionnaires, extracting function and domain classifications, and manually annotating the items with "function-domain" combination labels. We then employed prompt learning by feeding a customized template into BERT: the masked positions were predicted and filled, and the filled text was mapped to labels. This process enables multi-class classification of item texts in medical questionnaires. Results The constructed corpus comprised 347 clinical assessment items for lung cancer across nine "function-domain" labels. Experimental results indicated that the proposed method achieved an average accuracy of 93% on our self-constructed dataset, outperforming the runner-up, GAN-BERT, by approximately 6%. Conclusion The proposed method maintains robust performance while minimizing the cost of building medical questionnaire item corpora, demonstrating its value for research and practice in medical questionnaire classification.

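The prompt-learning pipeline the first abstract describes (template, masked-position prediction, then mapping filled words to "function-domain" labels) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the template wording, label words, and the `VERBALIZER` mapping are hypothetical, and the masked-LM predictor is passed in as a plain callable so the sketch stays self-contained.

```python
# Sketch of prompt-based multi-class classification of questionnaire items.
# The template and verbalizer below are illustrative assumptions, not the
# actual artifacts from the paper.

TEMPLATE = "{item} This item assesses the [MASK] domain of [MASK] function."

# Hypothetical verbalizer: maps the words predicted at the two masked
# positions to a "function-domain" combination label.
VERBALIZER = {
    ("physical", "motor"): "motor function - physical domain",
    ("emotional", "cognitive"): "cognitive function - emotional domain",
}

def classify(item_text, mask_filler):
    """Classify one questionnaire item.

    mask_filler: callable standing in for a BERT masked-LM head; given the
    prompted text, it returns the predicted word for each [MASK] slot.
    """
    prompt = TEMPLATE.format(item=item_text)
    filled = mask_filler(prompt)              # e.g. ("physical", "motor")
    return VERBALIZER.get(tuple(filled), "unknown")
```

With a stub predictor the mapping step can be exercised directly; in the paper's setting the callable would be replaced by a fine-tuned BERT masked-LM scoring the label words at each masked position.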
  • Research on fault diagnosis of patient monitor based on text mining

    Conventional fault diagnosis of patient monitors relies heavily on manual experience, resulting in low diagnostic efficiency and poor utilization of fault maintenance text data. To address these issues, this paper proposes an intelligent fault diagnosis method for patient monitors based on multi-feature text representation, an improved bidirectional gated recurrent unit (BiGRU), and an attention mechanism. First, the fault text data were preprocessed, and word vectors containing multiple linguistic features were generated by a linguistically-motivated bidirectional encoder representation from Transformers. Then, bidirectional fault features were extracted and weighted by the improved BiGRU and the attention mechanism, respectively. Finally, a weighted loss function was used to reduce the impact of class imbalance on the model. To validate the effectiveness of the proposed method, experiments were conducted on a patient monitor fault dataset, achieving a macro F1 value of 91.11%. The results show that the proposed model can automatically classify fault texts and may provide decision support for intelligent fault diagnosis of patient monitors in the future.

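The second abstract's use of a weighted loss against class imbalance can be illustrated with a short sketch: inverse-frequency class weights combined with a per-sample weighted cross-entropy. The weighting scheme shown (inverse frequency, normalized to average 1) is a common choice and an assumption here; the paper does not specify its exact weighting formula.

```python
import math
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights, normalized so they average to 1.

    Rare fault classes get weights > 1, frequent ones < 1, so misclassifying
    a rare class costs more in the loss below.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

def weighted_cross_entropy(probs, target, weights):
    """Weighted cross-entropy for one sample: -w[y] * log p(y).

    probs: dict mapping class name -> predicted probability.
    """
    return -weights[target] * math.log(probs[target])
```

For a dataset with 8 "normal" and 2 "fault" samples this gives the fault class a weight of 2.5 versus 0.625 for the majority class, which is the effect the paper relies on to keep minority fault categories from being ignored during training.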