I sincerely hope that all subscribers read the AJO-DO from cover to cover. However, as a realist, I know that you probably pick and choose the articles that you read, and will most likely select topics of clinical importance to you as an orthodontist in private practice. So, suppose you decide to read the article in this issue entitled “Effect of fluoridated chewing sticks on white spot lesions in postorthodontic patients.” The purpose of this study was to determine whether using fluoridated chewing sticks 5 times per day for 6 weeks would encourage remineralization of white spot lesions after orthodontic appliance removal. This could be valuable information for any clinician, if the product (chewing sticks) is effective. How do you, as the reader of this article, determine the quality of the evidence presented in this research report?
You might look first at the type of research. The authors stated that this is “a double-blind, randomized, longitudinal trial.” What does that mean? Is this a good research design or a poor research design? Is it likely that there will be bias in allocating the participants to the treatment groups?
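To make the vocabulary concrete: in a randomized trial, chance, not the investigator, decides who receives the treatment, and double blinding means that neither the participants nor the examiners know who is in which group. Below is a minimal sketch of computer-generated random allocation; the article does not describe the authors' actual procedure, so this function is purely illustrative.

```python
import random

def randomize(participants, seed=2024):
    """Allocate participants at random to 2 groups of (nearly) equal size.

    A simple complete-randomization sketch; real trials often add
    blocking or stratification and conceal the allocation sequence.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 37 hypothetical participants split into groups of 18 and 19.
test_group, control_group = randomize([f"P{i:02d}" for i in range(1, 38)])
```

Because the assignment comes from a random sequence rather than anyone's judgment, systematic bias in who ends up in each group is unlikely, although a flawed or poorly concealed procedure can still introduce it.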
Next, you look at the sample size. The authors reported that their sample of 37 subjects was divided into 2 subgroups of 19 and 18 subjects. Is this a large enough sample? How could you tell? The authors stated that they performed a power analysis before starting the study, with the following assumptions: “significance level of 0.01, a standard deviation of 3.0, at least a detectable difference of 4.0 (based on the DIAGNOdent pen values), and a power for that detection of 90%.” What does this mean? Is a sample size calculation a good thing to do? How does it affect the quality of the evidence?
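In plain terms, the authors asked: if the true difference between the groups were at least 4.0 DIAGNOdent units, how many subjects per group would give a 90% chance of detecting it at the 0.01 significance level? Here is a minimal sketch of that calculation using the statsmodels library, assuming the planned comparison was a two-sided, two-sample t test (the article does not spell out the formula the authors used).

```python
# Sketch of the stated power analysis: two-sided alpha = 0.01,
# SD = 3.0, smallest difference worth detecting = 4.0, power = 90%.
from statsmodels.stats.power import TTestIndPower

effect_size = 4.0 / 3.0  # detectable difference / standard deviation
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.01, power=0.90,
    ratio=1.0, alternative="two-sided",
)
print(f"required subjects per group: {n_per_group:.1f}")
```

Under these assumptions, the required number comes out in the high teens per group, in line with the subgroups of 19 and 18 that the authors enrolled.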
In the methods section, the authors stated that they used a nonpaired t test to compare the test and control groups. “Since a multiple t test was used, a statistically significant difference of P <0.01 was accepted.” The correlations between scores were analyzed by using the Pearson correlation coefficient. Wonderful. Are these the appropriate tests to achieve the stated purpose of this study? Are there better tests that should have been performed?
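For illustration only, here is what those two analyses look like in code, run on invented numbers rather than the study's data; the variables and the pairing used for the correlation are assumptions made for the example.

```python
# Illustrative only: an unpaired (two-sample) t test and a Pearson
# correlation, as described in the methods, on hypothetical data.
from scipy import stats

# Hypothetical DIAGNOdent pen readings; not the study's data.
test_group = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2]      # chewing sticks
control_group = [16.4, 15.1, 17.8, 14.9, 16.0, 15.5]  # no chewing sticks

t_stat, p_value = stats.ttest_ind(test_group, control_group)
print(f"unpaired t test: t = {t_stat:.2f}, P = {p_value:.4f}")
# Because several t tests were run, only P < 0.01 was accepted as
# significant, to limit the risk of a chance finding.

# Hypothetical paired scores on the same lesions; which scores the
# authors actually correlated is defined in the article itself.
visual_scores = [2, 1, 3, 2, 1, 3]
pen_readings = [11.8, 9.5, 14.6, 11.2, 10.1, 13.9]
r, p_r = stats.pearsonr(visual_scores, pen_readings)
print(f"Pearson correlation: r = {r:.2f}, P = {p_r:.4f}")
```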
Should you simply trust that the referees of this manuscript have made certain that these tests are valid? Or do you as the reader depend on me as the editor-in-chief to accept and publish only studies that have withstood critical analysis? What if this information were being presented at a meeting you were attending? Should you believe the results? How do you determine the quality of the research that you read in scientific articles or observe during oral presentations?
Your decision about what to believe will ultimately affect your patient care. In this example, you must decide whether you should recommend the use of fluoridated chewing sticks to remineralize enamel in your patients who develop white spot lesions during orthodontic therapy. To make the correct decision, you need to know something about statistics and research design. Where could you get that information?
Starting this month, you can learn that information in the AJO-DO. One of my goals as editor-in-chief is to help readers become better discriminators of research, both written and oral. For a clinician, the terminology is often difficult to understand, and much of what is written about statistics and research design does not apply directly to orthodontics.
So, I have established a new monthly feature for the AJO-DO, entitled “Statistics and research design.” The Associate Editor of this section is Dr Nikolaos Pandis from Corfu, Greece. Nick has been a clinical orthodontist for over 20 years and recently completed an MS degree in clinical trials. Each month, he will take a small piece of the topic and, in 2 pages, explain that aspect of statistics or research design, using terminology that clinicians can understand. Here is our goal: if you read this column regularly, you will eventually understand how to determine the quality of the evidence. Enjoy and learn!