As the value of the randomized controlled trial (RCT) becomes established, it is time to turn the attention of authors and reviewers to the challenge of conducting, reporting, and evaluating research activities. When the report of an RCT is sent out for review, reviewers must question all aspects of the article to see whether it meets current scientifically accepted criteria.
For starters, is the word “randomized” in the title? The ability to identify a randomized trial in an electronic database depends on how it is indexed, and indexers might not classify a report as a randomized trial if the authors do not explicitly report this information. Regarding the method of randomization, reducing selection bias at the outset of the trial is essential. If 2 treated groups are compared, did the authors explain exactly how all participants were selected? Despite the importance of specific inclusion and exclusion criteria, eligibility criteria are often inadequately reported. And what about sample size? It must be planned carefully, balancing orthodontic and statistical considerations. Ideally, the sample will be large enough to have a high probability of detecting a clinically important difference as statistically significant, if one exists. The size of the effect deemed important is inversely related to the sample size necessary to detect it; ie, large samples are needed to detect small differences.
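The inverse relation between effect size and sample size can be illustrated with the standard normal-approximation formula for comparing 2 group means at a given significance level and power. The function name and the numbers below are illustrative assumptions for this sketch, not figures from any trial discussed here.

```python
from math import ceil
from statistics import NormalDist

def per_group_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Approximate number of participants per group needed to detect a
    mean difference `delta` between 2 groups with common standard
    deviation `sigma` (2-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Halving the clinically important difference roughly quadruples
# the required sample per group:
print(per_group_sample_size(delta=0.5, sigma=1.0))   # 63 per group
print(per_group_sample_size(delta=0.25, sigma=1.0))  # 252 per group
```

This is only the textbook approximation; a real trial protocol would also account for expected dropout and the specific outcome and test planned.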
It seems like a short time ago, but it was actually December 2005 when the AJO-DO adopted the CONSORT (consolidated standards of reporting trials) and QUOROM (quality of reporting of meta-analyses) statements as policy for reporting RCTs, systematic reviews, and meta-analyses. New evidence has accumulated since then, and new statements are now available for both RCTs and systematic reviews. CONSORT 2010 ( www.consort-statement.org ) includes 3 new items: proof of registration for controlled trials, complete access to the original protocol of the study, and recognition of all funding sources. To aid reviewers in accurately reviewing a trial’s findings, detailed information is needed. Unfortunately, this is not always possible because many authors neglect to provide lucid and complete descriptions of critical information. This can lead to many phone calls in an attempt to track down the original investigators in the hope that they have a good memory for details. The CONSORT Explanation and Elaboration document explains and illustrates the principles underlying the CONSORT statement. We strongly recommend using this document for important clarifications on all the items.
The rising frustration of trying to accurately evaluate systematic reviews and meta-analyses has led to a new instrument named PRISMA, which stands for preferred reporting items for systematic reviews and meta-analyses. The aim of the PRISMA statement is to give authors an evidence-based minimum set of items to improve the reporting of systematic reviews and meta-analyses. The PRISMA statement consists of a 27-item checklist and a 4-phase flow diagram. It is an evolving document that is subject to change periodically as new evidence emerges. This guideline should help to ensure that all relevant information is included so that better and more informed health decisions can be made. PRISMA will replace the QUOROM statement, which has been useful in evaluating studies that use meta-analysis to pool results from multiple studies. We also encourage all participating reviewers to reference the PRISMA statement step by step when reviewing systematic reviews and meta-analyses.
“Systematic reviews and meta-analyses frequently form the basis for clinical decisions, but evidence suggests that in many cases these reviews are conducted and reported poorly,” said Dr David Moher, a senior scientist at Ottawa Hospital Research Institute and associate professor of medicine at the University of Ottawa in Canada. “This guideline will help ensure that all relevant information is included in these reviews, so that better and more informed health decisions can be made.”
Because there might be minor changes in these relatively new instruments over the next few years, make a habit of using the Web sites for the actual forms and instructional guidelines when designing your next study or serving as a reviewer. By referencing this same information on the Web, reviewers will become more effective with each evaluation. As usual, readers of the AJO-DO will benefit from these changes.