Objective To explore the methodological characteristics of Chinese clinical practice guidelines and expert consensus statements according to whether the GRADE approach was used. Methods CNKI, PubMed, WanFang Data, and Medlive.cn were electronically searched to collect Chinese clinical practice guidelines and expert consensus statements published over the 11 years from January 1st, 2010 to December 31st, 2020. Four reviewers independently extracted data according to the guideline quality appraisal tool AGREE II. The guidelines and consensus statements were divided into two groups according to whether GRADE was used, and changes and trends in methodological quality over the 11-year period were compared between the two groups. Results In recent years, the number of Chinese clinical practice guidelines and expert consensus statements using GRADE has increased annually. Those that did not use GRADE had lower methodological quality (P<0.01). Conclusions The use of GRADE in clinical practice guidelines and expert consensus statements still requires improvement; mastering the GRADE methodology can effectively improve their methodological quality.
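Appraisals of this kind rest on the standardised AGREE II domain score, which scales each domain's total rating between the minimum and maximum possible totals. A minimal sketch of that calculation is given below, assuming the standard 1–7 item rating scale; the example ratings are hypothetical and not taken from the study.

```python
def agree2_domain_score(ratings_per_appraiser):
    """Standardised AGREE II domain score, as a percentage.

    ratings_per_appraiser: one list of 1-7 item ratings per appraiser.
    """
    n_appraisers = len(ratings_per_appraiser)
    n_items = len(ratings_per_appraiser[0])
    obtained = sum(sum(r) for r in ratings_per_appraiser)
    minimum = 1 * n_items * n_appraisers   # every item rated 1 by every appraiser
    maximum = 7 * n_items * n_appraisers   # every item rated 7 by every appraiser
    return 100 * (obtained - minimum) / (maximum - minimum)

# Hypothetical ratings from 4 appraisers for a 3-item domain
example = [[5, 6, 4], [4, 5, 5], [6, 6, 5], [5, 4, 4]]
print(round(agree2_domain_score(example), 1))  # 65.3
```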
Objective To analyze the status of real-world studies (RWS) based on registration information in the Chinese Clinical Trial Registry (ChiCTR). Methods The ChiCTR website was searched using "real world" as the search term to collect relevant real-world study registrations from inception to May 4th, 2022, which were then analyzed descriptively. Results A total of 642 registered studies were included, with a median sample size of 482 cases. RWS were mainly observational studies, although the number of interventional studies increased year by year. There were 267 studies (41.59%) at the post-marketing or phase IV clinical trial stage. Most primary outcome measures were endpoint outcomes (56.23%), with overall survival the most commonly used (15.79%). 62.15% of the registered studies met the minimum requirements for registration. Conclusion The number of RWS registered in ChiCTR shows an increasing trend. At present, the research purposes of RWS are often unclear, and both the completeness of registration information and the overall content compliance of the studies need to be improved.
Systematic reviews (SRs) and meta-analyses, which provide the highest level of evidence in evidence-based medicine, are an indispensable basis for medical staff making clinical decisions. At the same time, the role of patients in shared decision-making is growing. At present, the results of SRs and meta-analyses are mainly presented as effect estimates (relative risks or mean differences) and forest plots, a form of presentation that is technical rather than intuitive; as a result, the translation of evidence into clinical decisions lags behind and cannot meet the need for rapid decision-making. With continuing progress in artificial intelligence and big-data analysis tools, researchers have attempted to introduce visual presentations to improve the timeliness of clinical decision-making. Through interpretation of SR and meta-analysis outcomes, this paper presents different visualizations from the perspectives of patients and clinical decision-makers. These not only help people without a medical background understand clinical evidence more intuitively and participate in the process of clinical decision-making, but also help improve residents' health literacy, promote the dissemination and sharing of knowledge, and provide a reference for further development of automated decision-support systems.
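As a minimal sketch of the kind of visualization discussed above, the following matplotlib code draws a simple forest plot of per-study and pooled relative risks; the study names, effect sizes, and confidence intervals are hypothetical and serve only to illustrate the presentation format, not any result from this paper.

```python
import matplotlib.pyplot as plt

# Hypothetical per-study relative risks and 95% CIs (illustration only)
studies = ["Study A", "Study B", "Study C", "Pooled"]
rr      = [0.80, 0.92, 0.70, 0.82]
ci_low  = [0.60, 0.75, 0.50, 0.71]
ci_high = [1.05, 1.12, 0.95, 0.94]

fig, ax = plt.subplots(figsize=(6, 3))
y = range(len(studies))[::-1]  # plot the pooled estimate at the bottom
for yi, est, lo, hi, name in zip(y, rr, ci_low, ci_high, studies):
    ax.plot([lo, hi], [yi, yi], color="black")        # confidence interval
    marker = "D" if name == "Pooled" else "s"         # diamond for the pooled effect
    ax.plot(est, yi, marker, color="black")
ax.axvline(1.0, linestyle="--", color="grey")          # line of no effect (RR = 1)
ax.set_yticks(list(y))
ax.set_yticklabels(studies)
ax.set_xscale("log")                                   # ratio measures are conventionally plotted on a log scale
ax.set_xlabel("Relative risk (95% CI)")
plt.tight_layout()
plt.show()
```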
Objective The ultimate goal of developing guidelines is their use in clinical practice. In this study, an implementation evaluation tool was developed to support comprehensive evaluation of guidelines and to improve their promotion and implementation. Methods The research group established a working team to formulate the guideline implementation evaluation tool through preliminary research, interviews, a systematic review of the relevant literature, two expert consensus meetings, and two rounds of Delphi expert consultation. Experts were invited to give opinions on, and rate, the domains, items, and overall evaluation method of the tool. Results The guideline implementation evaluation tool included 5 domains (accessibility, communicability, performability, recognizability, and applicability) with a total of 7 items. The scale-level content validity indices (S-CVIs) in the two Delphi rounds were 0.91 and 0.93. Opinions and suggestions were collected, and some revisions and additions were made; no items were deleted, as no item met the deletion criteria of mean score <3.5, coefficient of variation >15%, or I-CVI <0.78. Conclusion To provide a standard and method for evaluating guideline implementation, a guideline implementation evaluation tool was developed and appraised by clinicians and guideline development methodology experts. The tool shows satisfactory face and content validity. Empirical research is needed to verify its performance in evaluating guideline implementation.
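The content validity indices cited above are conventionally derived from expert relevance ratings on a 4-point scale. The sketch below shows one common way to compute the item-level CVI (I-CVI) and the scale-level average CVI (S-CVI/Ave); the rating matrix is hypothetical and is not data from this study.

```python
# Hypothetical Delphi relevance ratings (1-4 scale): one row per expert, one column per item
ratings = [
    [4, 3, 4, 2, 3, 4, 4],
    [3, 4, 4, 3, 4, 4, 4],
    [4, 4, 2, 4, 4, 3, 4],
    [4, 4, 4, 4, 3, 4, 3],
    [3, 4, 4, 4, 4, 4, 2],
]

n_experts = len(ratings)
n_items = len(ratings[0])

# I-CVI: proportion of experts rating the item 3 or 4 ("relevant")
i_cvi = [
    sum(1 for expert in ratings if expert[j] >= 3) / n_experts
    for j in range(n_items)
]

# S-CVI/Ave: mean of the item-level CVIs across all items
s_cvi_ave = sum(i_cvi) / n_items

print("I-CVI per item:", [round(v, 2) for v in i_cvi])
print("S-CVI/Ave:", round(s_cvi_ave, 2))
# Items with I-CVI < 0.78 would be flagged for revision or deletion
```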