Jocelyne S. Feine
Health care is increasingly expensive, and health care providers, consumers, insurance companies, and governments all want to know that the services provided are efficient and cost effective. In today’s society, the clinician has to be able to answer questions from patients and those who control payment plans. In some cases, clinicians must also defend their decisions before tribunals.
The fundamentals of good evidence-based health care are disarmingly simple: “doing the right thing” for patients (ie, providing appropriate care) and “doing the right thing right” (ie, providing appropriate care properly).1 The choice of treatment is based on the best available evidence and is guided by the preferences of patients who have been fully informed about the possible associated risks and benefits. Ideally, it should cost no more than an equally effective alternative service.1
Historically, clinical practice was opinion-based. Clinical decisions were often made by the dentist or physician after little discussion with the patient. The treatment plans were based on a mix of knowledge gained through training, subjective perception of past experiences, practice traditions, and the opinions of recognized authorities. This resulted in highly variable diagnoses and treatments for the same condition, as well as ineffective, expensive, and sometimes harmful interventions.2 This situation is no longer acceptable to society.
The evidence-based clinical practice movement promotes the translation of new scientific evidence into clinical care. It means integrating personal clinical expertise with the best available external evidence drawn from research into the accuracy and precision of diagnostic tests; the power of prognostic markers; and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens.3 To function in this new environment, health care providers must be able to (1) formulate answerable questions that stem from clinical issues, (2) track down the best evidence to answer them, (3) evaluate the evidence, (4) apply the results of the appraisal to their clinical practice, and (5) evaluate their own future clinical performance (audit). The questions posed should be as specific as possible and include the type of patient, the clinical intervention (if treatment choice is the subject of interest), and the appropriate outcome. Once the clinical questions have been properly formulated, a search of the literature can be carried out to find the relevant publications.4
Scientific literature is now readily accessible to anyone who wishes to find the information, particularly with the rapidly changing developments in computer resources and Internet communications. A review of the literature on therapies designed to treat pain produces a wide variety of publications ranging from systematic reviews to single case reports,4,5 which rank in value (from highest to lowest) as follows:
1. Systematic reviews
2. Randomized controlled trials (RCTs)
3. Controlled trials without randomization
4. Cohort or case-control studies
5. Descriptive studies (comparative or correlational)
6. Respected authorities, clinical experience (case reports), or expert committees
The best evidence of therapeutic efficacy will be found in properly conducted systematic reviews, particularly meta-analyses, of multiple well-designed RCTs. These are considered the gold standard for evidence because they evaluate the consistency of scientific results drawn from many RCTs carried out on different populations, often in different countries. If a systematic review or meta-analysis has not been undertaken for a relevant question, the next strongest evidence comes from at least one properly designed and executed RCT. If RCTs have not been done, the reader should look for nonrandomized trials, single-group pretreatment and posttreatment assessments, time series, or case-control studies. The last line of evidence on which to base a clinical decision comes from case series reports and the opinions of respected authorities in the field. Reports of expert consensus committees that are scientifically unsupported are given little credibility in this hierarchy, an approach that is very different from the tradition of valuing personal opinion of the expert above all other sources of information. A review of evidence-based pain management issues details the current state of evidence-based care and concludes that the necessary evidence to support the management of various types of pain conditions is not always available.2 For example, a systematic review on the efficacy of occlusal adjustment treatments for temporomandibular disorders (TMDs) found no evidence to support this common therapeutic approach.6 Similarly, another systematic review concluded that there is inadequate evidence available to draw conclusions on the efficacy of splint therapy to reduce pain associated with TMDs.7
Assessing the evidence of RCTs
To assess the quality of a clinical trial, one must understand the basic principles of study design. Some of these principles are illustrated below using a hypothetical study designed to test whether a new analgesic reduces the postoperative pain associated with the placement of dental implants significantly faster than the standard medication.
1. Sample. The group of subjects enrolled in the trial should be a good, representative sample of the general population of patients who undergo implant surgery. If implant patients are generally older adults, then the sample should be drawn from that group.
2. Sample size. The sample should be large enough to satisfy statistical criteria, and a power calculation determining the number of subjects necessary to detect real differences (and similarities) should be reported in the publication.
3. Outcomes. The outcomes chosen should be appropriate and valid. If the purpose of the treatment is to reduce postoperative pain and if one wishes to show that the new analgesic reduces the pain faster than the standard treatment, then the primary outcome of the study should be patients’ ratings of pain measured over a period of time. Other outcomes of interest may also be used such as the use of medications and time off from work, among others.
4. Randomization. The treatments should be randomly allocated to patients undergoing implant surgery so that about half will receive the new analgesic and the rest of the patients will receive a standard analgesic.
5. Blinding. The study should be double-blinded, meaning neither the patients nor the clinicians should know which analgesic a patient receives. This is often accomplished by having the pharmacist prepare the medications so that they look exactly the same (eg, pills of the same size, color, and taste).
6. Statistical analysis. Appropriate statistical tests should be conducted to determine whether detected differences between treatments are significant and whether other factors (eg, age, sex, side effects) alter the outcome.
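The quantitative steps in the design above (sample size, random allocation, and significance testing) can be sketched in a short simulation. All numbers here are illustrative assumptions for the hypothetical analgesic trial, not values from any actual study: a 10-mm difference on a 100-mm pain scale, a standard deviation of 20 mm, a two-sided alpha of .05, and 80% power.

```python
import math
import random
import statistics
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two means.

    delta: smallest difference worth detecting (assumed, hypothetical).
    sd: assumed standard deviation of the outcome in each group.
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return math.ceil(n)

def block_randomize(n_patients, block_size=4):
    """Allocate patients to 'new' or 'standard' in shuffled balanced blocks,
    so roughly half receive each analgesic (item 4 above)."""
    arms = []
    while len(arms) < n_patients:
        block = ["new"] * (block_size // 2) + ["standard"] * (block_size // 2)
        random.shuffle(block)
        arms.extend(block)
    return arms[:n_patients]

def two_sample_test(a, b):
    """Large-sample z test for a difference in mean pain ratings (item 6).
    Returns the test statistic and a two-sided p value."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Worked example with the assumed design values.
n = sample_size_per_group(delta=10, sd=20)   # subjects needed per group
arms = block_randomize(2 * n)
# Simulated (hypothetical) pain ratings: new drug truly 10 mm lower on average.
pain = [random.gauss(40 if arm == "new" else 50, 20) for arm in arms]
new_grp = [x for x, a in zip(pain, arms) if a == "new"]
std_grp = [x for x, a in zip(pain, arms) if a == "standard"]
z_stat, p_value = two_sample_test(new_grp, std_grp)
```

With these assumed values the power calculation calls for 63 subjects per group; a real trial would use purpose-built statistical software and a test appropriate to the data (eg, a t test or repeated-measures model), but the logic is the same.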
Box 18-1 presents an example of the use of evidence for deciding whether to use a new local anesthetic for acute pain control.
Box 18-1 Evidence-based decision for use of a new local anesthetic for acute pain

Strong evidence of efficacy: The new anesthetic was found to perform better than the old standard in all of seven well-run clinical trials.
Fair evidence of efficacy: The new anesthetic was shown to be superior to a standard anesthetic in one clinical trial.
Little evidence of efficacy: Recommendation made for other reasons; for example, the patient is allergic to other local anesthetics.
Fair evidence not to use the new anesthetic: The anesthetic was found to be equal in efficacy to less expensive alternatives.
Good evidence not to use the new anesthetic: Many people