Chapter 10. How to Avoid Being Misled by Clinical Studies’ Results in Dentistry

Alonso Carrasco-Labra, D.D.S., M.Sc., Ph.D.; Romina Brignardello-Petersen, D.D.S., M.Sc., Ph.D.; Amir Azarpazhooh, D.D.S., M.Sc., Ph.D.; Michael Glick, D.M.D.; and Gordon H. Guyatt, M.D., M.Sc.

Introduction

In previous chapters in this book, we presented the process and main principles of evidence-based dentistry (EBD);1 how to search for evidence;2 and how to use articles about therapy,3 harm,4 diagnosis,5 systematic reviews,6 clinical practice guidelines,7 qualitative studies,8 and economic evaluations.9 In this chapter, we offer clinicians guidance on how to avoid being misled by biased interpretations of study results.

Academic competition and conflicts of interest have fueled misleading presentations of research findings published in peer-reviewed journals. Whether a researcher works in academia or in the pharmaceutical industry, there is always personal interest and mounting pressure to succeed and to produce novel and exciting findings; this pressure often results in interpretations of findings that are far more enthusiastic than the data warrant.10

In psychopharmacology, for example, the investigators of 90% to 98% of industry-funded primary studies comparing two drugs reported results favoring the sponsoring company’s drug, particularly when the active comparator was a rival product.11 This situation is not exclusive to primary studies. The investigators of industry-sponsored systematic reviews are less transparent about their methods, less rigorous in their risk of bias assessments, and more favorable in their conclusions toward the sponsor’s drug than are the investigators of reviews without industry funding.12 When companies employ ghostwriters to produce manuscripts under the names of credible and often well-known researchers, the reported results are likely to be overly favorable.13

The involvement of industry is not necessary for overenthusiastic interpretations of results. Academic investigators are also part of the global enterprise of producing research evidence. The reward system in science revolves around receiving grants and having research results published, and scientists may believe that overplaying the significance of their work is a requirement for success.14

Although guidance and tools to help clinicians recognize study results that have a high risk of bias are widely available,15,16 researchers have made limited efforts to facilitate the identification of distorted interpretations and misleading presentations of the results of clinical studies. We present the following examples not to criticize investigators, but to illustrate the need to increase awareness among clinicians and encourage them to avoid putting excessive trust in investigators’ interpretations of their findings.

Guidance on How to Avoid Being Misled by the Results of Clinical Studies

We present seven criteria that dental professionals can follow to avoid being misled by the results of clinical studies.17 We illustrate each criterion with a real example from the dental literature, shown in the boxes after each section.

1. Read only the methods and results sections; disregard the inferences.

Not only in the discussion but also in the conclusion and introduction sections of research articles, investigators may provide inferences that differ from those a less conflicted or less involved reader would offer. A number of investigators have addressed the association between funding and the conclusions derived from randomized controlled trials.18-22 Results have been consistent: researchers are more enthusiastic about new interventions when funding comes from for-profit sources than from not-for-profit sources. In dentistry, investigators have documented that randomized controlled trials in which the authors reported conflicts of interest are more likely to report results supporting the intervention under study than trials whose authors did not report conflicts of interest (odds ratio [OR] = 2.40; 95% confidence interval [CI], 1.16-5.13).21

This situation also affects systematic reviews and meta-analyses of drug interventions. Although industry-sponsored and nonindustry-sponsored reviews (for example, Cochrane systematic reviews) answering the same clinical question report similar treatment effect estimates, the former provide more favorable conclusions.12 In summary, our advice is to read only the methods and results sections of these articles, skipping the discussion section. To apply this guideline, however, clinicians must be able to assess the rigor of the methods and interpret the results themselves.
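To make the meaning of an odds ratio and its confidence interval concrete, the short sketch below works through the arithmetic with an entirely hypothetical two-by-two table; the counts are illustrative only and are not the data from any study cited in this chapter, and Python is used simply as a calculator.

```python
import math

# Hypothetical counts, chosen only for illustration (not data from any cited study):
# rows = trials whose authors did or did not report conflicts of interest (COI)
# columns = conclusions favorable or unfavorable to the intervention under study
favorable_coi, unfavorable_coi = 40, 20        # trials reporting COI
favorable_no_coi, unfavorable_no_coi = 30, 36  # trials not reporting COI

# Odds ratio: odds of a favorable conclusion with COI divided by the odds without COI
odds_ratio = (favorable_coi / unfavorable_coi) / (favorable_no_coi / unfavorable_no_coi)

# Approximate 95% confidence interval on the log-odds scale (Wald method)
se_log_or = math.sqrt(1 / favorable_coi + 1 / unfavorable_coi +
                      1 / favorable_no_coi + 1 / unfavorable_no_coi)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")  # OR = 2.40 with these counts
```

With these hypothetical counts, an OR of 2.40 means the odds of a favorable conclusion are 2.4 times higher among trials reporting conflicts of interest; because the 95% CI (here roughly 1.2-5.0) excludes 1.0, chance alone is an unlikely explanation for the association.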

Box 10.1. Example: Is an herbal mouth rinse effective in reducing dental biofilm and the incidence of caries?

The investigators of a crossover randomized controlled trial in 12 healthy participants compared two mouth rinses: an experimental formulation containing extract and essential oil of Baccharis dracunculifolia and a control mouth rinse based on a basic formulation without an active component.23 The investigators followed the participants for one week and measured mean biofilm values. In the results section of the article, the authors provided no numerical data and stated only that differences between the groups were not statistically significant. Because the investigators failed to show a difference between the treatments, the trial provided no evidence to support the intervention. Nevertheless, a clinician reading the article and focusing on the discussion section would note the following: “Based on the result that there is the same efficiency of the B. dracunculifolia and already marketed mouth rinses, we suggest the use of this natural substrate for prevention and reduction of dental biofilm, as well as caries disease.”23 This statement in the discussion section not only misleadingly suggests equivalence of treatment effect between the mouth rinses but also refers to a reduction in the incidence of caries, an outcome the investigators did not measure.

2. Read synoptic abstracts published in secondary publications (preappraised resources) for evidence-based dentistry.

Busy clinicians interested in using evidence to inform their clinical practice may not have time to skip the discussion sections of articles and critically appraise the evidence, making sense of the results on their own. Secondary journals and sources, such as Evidence-Based Dentistry, Journal of Evidence-Based Dental Practice, and the American Dental Association’s Evidence Database, publish synopses in an abstract format that include a brief summary of the original article and a critical appraisal conducted by a team of clinicians and methodologists. These abstracts, developed by independent third parties who have no conflicts of interest, reduce the distortion that the authors of a primary or secondary study may have introduced in the original article. Another objective of this type of synopsis is to educate clinicians about the methodological aspects of different study designs, thereby strengthening clinicians’ critical appraisal skills.

Box 10.2. Example: Does periodontal therapy improve health outcomes and reduce medical costs?

The investigators of a retrospective cohort study addressed the impact of periodontal treatment on medical costs and inpatient hospitalizations in the five years after treatment in patients with type 2 diabetes, coronary artery disease, cerebral vascular disease, pregnancy, and rheumatoid arthritis.24 The authors’ conclusions were as follows: “These cost-based results provide new, independent, and potentially valuable evidence that simple, noninvasive periodontal therapy may improve health outcomes in pregnancy and other systemic conditions.” A synopsis published in the Journal of Evidence-Based Dental Practice provided a two-page summary of the original study, including a commentary and analysis.25 The author of the commentary stated, “It is very unusual for me to have very strong doubts about how a paper that makes such important claims yet has so many shortcomings gets published in a good refereed journal. This is such a paper.”25 The author of the commentary then provided a detailed explanation of the study’s limitations and the implications of the results in terms clinicians could understand, and concluded that “the suggested implications for disease management based on the results they report are highly contentious and unjustified.”25

3. Beware of large treatment effects presented in trials with few events.

Clinicians often are appropriately skeptical about applying evidence from a single study in clinical practice. One argument is that the first studies investigators conduct to determine the effects of an intervention usually have a small sample size (for example, fewer than 200 participants) and too few events. A meta-epidemiologic study in the oral medicine and maxillofacial surgery literature showed that investigators of small randomized trials (that is, those involving fewer than 200 patients) were more likely to report larger and more beneficial effects than investigators of large randomized trials (that is, those involving at least 200 patients) (OR = 0.92; 95% CI, 0.87-0.98; P = .009).26 Most of the time, our therapeutic interventions target only one or two of the many pathologic mechanisms involved in the genesis of a disease.27 This is why, not only in dentistry but in medicine in general, few interventions are able to demonstrate a large and real treatment effect.

A systematic survey whose investigators analyzed 85,000 meta-analysis results extracted from 3,082 systematic reviews showed that, in 10% of cases, the first trial reported a statistically significant and large treatment effect that proved to be much smaller once subsequent evidence accumulated.28 It is important to note that, when few events are available, even systematic reviews with meta-analyses can have this problem. Readers applying this guideline should beware of treatment effects that look “too large to be real,” because they are likely to be misleading.
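A simple way to see why a trial with few events can yield a dramatic but fragile estimate is to compute the confidence interval for the same apparent effect with few versus many events. The sketch below uses entirely hypothetical counts (not a reanalysis of any study cited in this chapter) to show how the interval narrows only when events accumulate.

```python
import math

def odds_ratio_with_ci(events_tx, no_events_tx, events_ctl, no_events_ctl):
    """Odds ratio and Wald 95% CI from a 2x2 table of events and non-events."""
    or_ = (events_tx / no_events_tx) / (events_ctl / no_events_ctl)
    se = math.sqrt(1 / events_tx + 1 / no_events_tx +
                   1 / events_ctl + 1 / no_events_ctl)
    lower = math.exp(math.log(or_) - 1.96 * se)
    upper = math.exp(math.log(or_) + 1.96 * se)
    return round(or_, 2), round(lower, 2), round(upper, 2)

# Hypothetical small trial: only 12 events in total, apparently large benefit
print(odds_ratio_with_ci(2, 98, 10, 90))     # OR ~0.18, 95% CI roughly 0.04-0.86

# Hypothetical larger body of evidence: same event proportions, ten times the events
print(odds_ratio_with_ci(20, 980, 100, 900)) # same OR ~0.18, 95% CI roughly 0.11-0.30
```

With only a handful of events, the interval spans everything from a negligible to a dramatic effect, so an impressive point estimate says little; only when many more events accumulate does the estimate become stable enough to act on.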

Box 10.3. Example: Does chlorhexidine oral rinse reduce mortality in patients in intensive care units?

The investigators of the first randomized controlled trial that addressed the effectiveness of oropharyngeal decontamination with 0.12% chlorhexidine gluconate oral rinse in patients in intensive care units suggested that this intervention reduces mortality by an astounding 80% (odds ratio [OR] = 0.20; 95% confidence interval [CI], 0.04-0.92).29 The investigators of this trial enrolled 353 patients and reported 12 deaths. The authors of a subsequent systematic review including 14 trials, 2,111 patients, and 511 deaths demonstrated no benefit; indeed, the best estimate was a 10% increase in mortality (OR = 1.10; 95% CI, 0.87-1.38).30
