Do you believe in magic?

“Jet Stream” is marketed as a novel way to enhance the rate of tooth movement and decrease root resorption. Jet Stream delivers a column of hot air to the mouth as the patient holds a narrow straw between his or her incisors. In a recently published prospective study, the investigators claimed that the prolonged increase in oral temperature induces the periodontal ligament cells to dramatically expedite bone turnover rates with less undermining resorption. What an innovative technique! But does it work? Your brief literature search revealed a study that concluded that Jet Stream is indeed effective in decreasing root resorption and enhancing tooth movement.

Soon after you contact the Jet Stream representative to further explore this product, you encounter a new study, published in a reputable journal, that contradicts the original article. This study concluded that Jet Stream has no effect on root resorption or the rate of tooth movement. Both studies used appropriate statistical analyses and adequate sample sizes, yet they reached totally different conclusions. Now you don't know what to believe. Is Jet Stream worth your attention, and the additional cost and effort for your patients?

Contradictions are frequent among investigative reports, especially clinical trials. Most of us recall reading or hearing about apparently sound investigations that led to opposing conclusions, undermining our confidence in their credibility. Two hallmark characteristics of sound investigations are reliability and validity. Reliability is the repeatability or consistency of the results and measurements. Validity is an indication that the investigation measures what it is supposed to measure. A study must be reliable to be considered valid, although reliability alone does not guarantee validity.
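The distinction between reliability and validity can be made concrete with a toy numerical sketch (the gauge readings below are invented for illustration): a miscalibrated instrument can return highly consistent, and therefore reliable, measurements that are nevertheless far from the true value, and therefore not valid.

```python
import statistics

# Hypothetical repeated measurements (mm) of a tooth-movement distance whose
# true value is 1.00 mm, taken with a consistently miscalibrated gauge.
true_value = 1.00
readings = [1.24, 1.25, 1.23, 1.26, 1.24]

# Small spread across repeats -> the measurement is reliable (repeatable).
spread = statistics.stdev(readings)

# Large systematic offset from the true value -> it is not valid.
bias = statistics.mean(readings) - true_value

print(f"spread (reliability): {spread:.3f} mm")
print(f"bias (validity):      {bias:.3f} mm")
```

The spread is a few hundredths of a millimeter while the bias is roughly a quarter of a millimeter: repeatable results, but not a measurement of what it claims to measure.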

Although all measurement procedures involve some degree of error, studies vary in their reliability, and this affects their reproducibility. An interesting publication involved 270 collaborators who replicated 100 studies from the psychological literature. Experimental design, methods, and data analysis were evaluated. Significance and P values, as well as meta-analyses of each experimental effect, were considered, along with the subjective assessments of the investigators who replicated the experiments. The authors found that fewer than half of the replications confirmed the results of the original studies. Although 97% of the original investigations showed statistically significant results, only 36% of the replicated studies reached the same level of significance. Almost half of the original effect sizes fell within the 95% confidence interval of the corresponding replication effect, and only 39% of the effects were subjectively judged to have replicated.
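Part of such a gap can arise from statistical power alone, with no error or misconduct anywhere. The following simulation is a hypothetical sketch, not the replication project's actual methodology: studies of a modest but real effect are "published" only when they happen to cross the significance threshold, and an identical replication then succeeds only about as often as the design's power allows.

```python
import math
import random

random.seed(42)

def run_study(true_effect, n, sigma=1.0):
    """Simulate one study: return the z statistic for the sample mean of n observations."""
    mean = sum(random.gauss(true_effect, sigma) for _ in range(n)) / n
    return mean / (sigma / math.sqrt(n))

def replication_experiment(true_effect=0.3, n=30, studies=2000, z_crit=1.96):
    """Count originals that reach significance and how many of their replications do too."""
    published = 0
    replicated = 0
    for _ in range(studies):
        if abs(run_study(true_effect, n)) > z_crit:      # original is significant -> "published"
            published += 1
            if abs(run_study(true_effect, n)) > z_crit:  # identical replication attempt
                replicated += 1
    return published, replicated

pub, rep = replication_experiment()
print(f"significant originals: {pub}")
print(f"significant replications: {rep} ({rep / pub:.0%})")
```

With these made-up parameters (a true effect of 0.3 standard deviations and 30 observations per study), the design's power is roughly 40%, so only a minority of faithful replications reach significance even though every simulated effect is real. Significance filtering at publication does the rest.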

Several factors contribute to the reproducibility of investigative findings; some are within the control of the investigator, and some are not. The most reproducible investigations are those with the strongest statistical evidence: for example, smaller (stronger) P values in the original study are associated with a higher likelihood of successful replication. The expertise, stature, and acclaim of the investigators, by contrast, have little to do with the reproducibility of an investigation.

The poor correlation between the original and the replicated studies may be due to inadvertent errors at any level of the experimental process. Selective reporting, selective analysis, or failure to disclose the conditions necessary to achieve significant results, however, often precludes a precise replication of a project. When intentional, such practices challenge veracity in research and can be deceptive.

Editors, reviewers, and readers are most attracted to novel discoveries or techniques, and prized publications often describe these accomplishments. In the clinical arena, articles that depict innovative treatment modalities may involve selective reporting. Presentations by authoritative figures who display only successful cases ("cherry picking") can be misleading but very convincing. Likewise, the claims of proponents, or opponents, of specific techniques may not be supported by reputable investigations generated by controlled studies.

Investigative reporting is the cornerstone of evidence-based orthodontic therapy and potentially affects our level of health care delivery. Vigilant journal reviewers as well as mastery of the classical and contemporary literature are necessary to prevent misinformation generated by studies that are impossible to reproduce—and might be too good to be true. If something sounds like magic, it just may be.

Apr 4, 2017 | Posted in Orthodontics