A scientific and rigorous study design can improve the reliability of the results of comparative diagnostic test accuracy studies. Designing such a study involves constructing the clinical question, identifying an appropriate gold standard, selecting a representative patient sample, calculating the sample size, interpreting and comparing the results of the diagnostic tests under blinding, and setting the cut-off value. This paper introduces five categories of comparative diagnostic test accuracy study designs: fully paired, partially paired with a random subset, partially paired with a nonrandom subset, unpaired randomized, and unpaired nonrandomized.
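The abstract lists sample size calculation as one design step but gives no formula. As a minimal sketch, assuming the commonly used Buderer approach (sizing separately on the desired precision of sensitivity and specificity, then inflating by disease prevalence), the calculation could look like this; the function names and default values are illustrative, not from the source:

```python
from math import ceil

def sample_size_sensitivity(sens, precision, prevalence, z=1.96):
    """Total subjects needed so the sensitivity CI half-width is <= precision.

    Buderer-style sketch (an assumption, not the paper's method):
    first size the diseased subgroup, then inflate by prevalence.
    """
    n_diseased = (z ** 2) * sens * (1 - sens) / precision ** 2
    return ceil(n_diseased / prevalence)

def sample_size_specificity(spec, precision, prevalence, z=1.96):
    """Total subjects needed for the specificity estimate, sized on the
    non-diseased subgroup and inflated by (1 - prevalence)."""
    n_nondiseased = (z ** 2) * spec * (1 - spec) / precision ** 2
    return ceil(n_nondiseased / (1 - prevalence))

# Example: anticipated sensitivity 0.90, specificity 0.80,
# precision +/-0.05, prevalence 20%; recruit the larger of the two totals.
n_se = sample_size_sensitivity(0.90, 0.05, 0.20)
n_sp = sample_size_specificity(0.80, 0.05, 0.20)
```

In practice the larger of the two totals is taken as the recruitment target, since both accuracy estimates must reach the stated precision.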
A comparative diagnostic test accuracy study is a type of diagnostic accuracy study that compares the accuracy of two or more index tests within the same study. Applying the GRADE approach to comparative test accuracy differs from applying it to single test accuracy, chiefly in the selection of appropriate comparative study designs, the additional criteria for judging risk of bias, and the consequences of using comparative measures of test accuracy. This paper focuses on the basic principles and methods of the GRADE approach in systematic reviews of comparative test accuracy, to promote understanding and application of the method by domestic scholars.
The comparative diagnostic test accuracy (CDTA) study is an important part of diagnostic test accuracy research; it aims to compare the accuracy of two or more index tests within the same study. With the development of CDTA studies and of systematic review methodology, the number of CDTA systematic reviews has grown year by year, providing evidence to support clinical decision-making. Compared with a systematic review of single diagnostic test accuracy, a CDTA systematic review has unique features, especially in data extraction, risk of bias assessment, and statistical analysis. This paper introduces the steps in, and precautions for, writing a CDTA systematic review, to provide a reference for CDTA systematic reviewers.
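The statistical analysis mentioned above depends on the design: in a fully paired design, every patient receives both index tests, so the two sensitivities are estimated on the same diseased patients and the discordant pairs drive the comparison. A minimal sketch of that paired analysis (a McNemar-style statistic on discordant pairs; the function name and data layout are illustrative assumptions, not from the source):

```python
def compare_paired_sensitivity(results):
    """Compare two index tests on fully paired data.

    results: list of (test_a_positive, test_b_positive) booleans for
    patients confirmed diseased by the reference (gold) standard.
    Returns (sensitivity_a, sensitivity_b, difference, mcnemar_chi2).
    """
    n = len(results)
    se_a = sum(1 for a, _ in results if a) / n
    se_b = sum(1 for _, b in results if b) / n
    # Only discordant pairs carry information about which test is better.
    b = sum(1 for a, bb in results if a and not bb)   # A positive, B negative
    c = sum(1 for a, bb in results if not a and bb)   # A negative, B positive
    mcnemar_chi2 = (b - c) ** 2 / (b + c) if (b + c) else 0.0
    return se_a, se_b, se_a - se_b, mcnemar_chi2

# Hypothetical data: 10 diseased patients, each tested with both tests.
paired = ([(True, True)] * 6 + [(True, False)] * 2
          + [(False, True)] + [(False, False)])
se_a, se_b, diff, chi2 = compare_paired_sensitivity(paired)
```

Unpaired designs, by contrast, compare independent proportions across groups, which is one reason a CDTA review must extract the design type along with the 2x2 data.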