A Model for Implementing Evidence-Based Decisions in Dental Practice

Fig. 5.1

Flowchart describing the different steps and components of the evidence-based model

5.2.1 PICO Question

A well-developed question in the PICO format may facilitate optimization and application of the evidence-based approach in dental practice. This method may also provide a framework for a more effective literature search [7].
The PICO question was formulated as follows:
P (patient): adults with peri-implantitis.
I (intervention): surgical intervention.
C (comparison): nonsurgical intervention.
O (outcomes): clinical attachment level (CAL) gain, pocket probing depth (PPD) reduction, and implant survival.
“In patients with peri-implantitis, which is the most effective treatment (surgical or nonsurgical approach) with regard to PPD reduction, probing attachment level (PAL) change, and dental implant survival?”
The question should be sufficiently focused to enable the clinician to search for evidence that answers the problem for a specific patient or population.

5.2.2 Search Process and Selection of the Studies

The literature search strategy should be sensitive enough to retrieve all relevant literature that can help answer our PICO question. It should also be specific enough to retrieve only literature relevant to the topic in question. The key to a good search strategy is the balance between these two concepts. A busy clinician does not have time to filter all irrelevant information from a very sensitive literature search.
For the topic presented, we searched the PubMed and CENTRAL electronic databases on 1 January 2009 using the key words “peri-implantitis” and “review.” We limited our search to systematic reviews (SRs) in English. We also conducted a manual search of the reference lists of the SRs selected.
We searched for SRs of randomized controlled trials (RCTs) on the treatment of peri-implantitis. Narrative reviews and consensus reports were excluded, as were SRs that included trials on animals.

5.2.3 Appraisal of the Studies Selected

After selection of the literature, we assessed its methodological quality using three standardized checklists:

1.

The Critical Appraisal Skills Programme (CASP). This was developed by professionals with backgrounds in public health, epidemiology, or evidence-based practice. The CASP tools, each developed for a specific research design, are divided into three sections covering internal validity, results, and relevance to practice (www.phru.nhs.uk/Pages/PHD/FAQs.htm). The authors of CASP state that the tools were developed to help with critically appraising articles on seven types of research: systematic reviews, RCTs, qualitative research, economic evaluation studies, cohort studies, case-control studies, and diagnostic test studies. In our case, for obvious reasons, only the checklist for assessing systematic reviews was used (Table 5.1).

Table 5.1

CASP checklist
Did the review ask a clearly focused question?
Did the review include the right type of study?
Did the reviewers try to identify all relevant studies?
Did the reviewers assess the quality of the studies included?
If the results of the studies have been combined, was it reasonable to do so?
How are the results presented and what is the main result?
How precise are these results?
Can the results be applied to the local population?
Were all important outcomes considered?
Should policy or practice change as a result of the evidence contained in this review?
 
2.

Quality of Reporting of Meta-analyses (QUOROM). This checklist was developed, by consensus, by a group of health professionals that included clinical epidemiologists, clinicians, statisticians, editors, and researchers [8]. It consists of eighteen headings and subheadings (Table 5.2) describing the ideal way of presenting each section of a systematic review (abstract, introduction, methods, results, and discussion). The authors of QUOROM suggest using the checklist to produce sound and reproducible systematic reviews and meta-analyses of RCTs [8].

Table 5.2

QUOROM checklist
Title identifies the report as a meta-analysis (or systematic review) of RCTs
The abstract uses a structured format
The abstract describes the clinical question explicitly
The abstract describes the databases and other information sources
The abstract describes the review methods (the selection criteria (i.e., population, intervention, outcome, and study design), methods for validity assessment, data abstraction, study characteristics, and quantitative data synthesis in sufficient detail to enable replication)
The abstract describes the results (characteristics of the RCTs included and excluded, qualitative and quantitative findings (i.e., point estimates and confidence intervals), and subgroup analyses)
The conclusion of the abstract describes the main results
The introduction of the review describes the explicit clinical problem, the biological rationale for the intervention, and the rationale for the review
The methods section describes the search strategy (the information sources (in detail, e.g., databases, registers, personal files, expert informants, agencies, hand-searching), and any restrictions (years considered, publication status, language of publication)
The methods section describes the selection of studies (the inclusion and exclusion criteria (defining population, intervention, principal outcomes, and study design))
The methods section describes the validity assessment (the criteria and process used (e.g., masked conditions, quality assessment, and their findings))
The methods section describes data abstraction (the process or processes used (e.g., completed independently, in duplicate))
The methods section describes study characteristics (the type of study design, participants’ characteristics, details of intervention, outcome definitions, and how clinical heterogeneity was assessed)
The methods section describes quantitative data synthesis (the principal measures of effect, method of combining results (statistical testing and confidence intervals), handling of missing data, how statistical heterogeneity was assessed, a rationale for any a-priori sensitivity and subgroup analyses, and any assessment of publication bias)
The results section shows a trial flow (flowchart figure)
The results section demonstrates study characteristics (descriptive data for each trial (e.g., age, sample size, intervention, dose, duration, follow-up period))
The results section describes quantitative data synthesis (report agreement on the selection and validity assessment; presents simple summary results (for each treatment group in each trial, for each primary outcome); presents data needed to calculate effect sizes and confidence intervals in intention-to-treat analyses (e.g., 2 × 2 tables of counts, means and SDs, proportions))
The discussion section summarizes key findings; discusses clinical inferences based on internal and external validity; interprets the results on the basis of all the available evidence; describes potential biases in the review process (e.g., publication bias); and suggests a future research agenda
 
3.

The Assessment of Multiple Systematic Reviews (AMSTAR). This checklist comprises 11 items derived from 37 items (Table 5.3). The tool is based on empirical evidence and expert consensus and has been externally validated [9].

Table 5.3

AMSTAR checklist
Was an “a-priori” design provided? The research question and inclusion criteria should be established before the review is conducted
Was there duplicate study selection and data extraction? There should be at least two independent data extractors and a consensus procedure for disagreements should be in place
Was a comprehensive literature search performed? At least two electronic sources should be searched. The report must include years and databases used (e.g., Central, EMBASE, and MEDLINE). Key words and/or MESH terms must be stated, and where feasible, the search strategy should be provided. All searches should be supplemented by consulting current contents, reviews, textbooks, specialized registers, or experts in the particular field of study, and by reviewing the references in the studies found
Was the status of publication (i.e., grey literature) used as an inclusion criterion? The authors should state that they searched for reports irrespective of publication type. The authors should state whether or not they excluded any reports (from the systematic review), on the basis of publication status, language, etc.
Was a list of studies (included and excluded) provided? A list of included and excluded studies should be provided
Were the characteristics of the included studies provided? In an aggregated form such as a table, data from the original studies should be provided on the participants, interventions, and outcomes. The ranges of characteristics in all the studies analyzed, e.g., age, race, sex, relevant socioeconomic data, disease status, duration, severity, or other diseases, should be reported
Was the scientific quality of the included studies assessed and documented? “A-priori” methods of assessment should be provided (e.g., for effectiveness studies if the author(s) chose to include only randomized, double-blind, placebo-controlled studies, or allocation concealment as inclusion criteria); for other types of studies, alternative items will be relevant
Was the scientific quality of the included studies used appropriately in formulating conclusions? The methodological rigor and scientific quality should be considered in the analysis and the conclusions of the review, and explicitly stated in formulating recommendations
Were the methods used to combine the findings of studies appropriate? For the pooled results, a test should be performed to ensure the studies were combinable and to assess their homogeneity (e.g., chi-squared test for homogeneity, I²). If heterogeneity exists, a random-effects model should be used and/or the clinical appropriateness of combining should be taken into consideration (i.e., is it sensible to combine?).
Was the likelihood of publication bias assessed? Assessment of publication bias should include a combination of graphical aids (e.g., funnel plot, other available tests) and/or statistical tests (e.g., Egger regression test)
Was the conflict of interest stated? Potential sources of support should be clearly acknowledged in both the systematic review and the studies included
 
Checklists were scored YES (the assessed criterion was met in the systematic review), NO (the criterion was not met), CANNOT TELL (the information was unclear), or NOT APPLICABLE. The methodological quality was determined from the percentage of YES scores in each assessed study.
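As a minimal sketch of this scoring rule (the answers below are invented for illustration, and the handling of NOT APPLICABLE items is an assumption, since the chapter does not give a worked example):

```python
# Hypothetical answers for one SR against a 10-item checklist
# (illustrative only, not the chapter's actual scores).
answers = ["YES", "YES", "NO", "CANNOT TELL", "YES",
           "YES", "NO", "YES", "NOT APPLICABLE", "YES"]

# Assumption: NOT APPLICABLE items are excluded from the denominator;
# the chapter does not specify this detail.
scored = [a for a in answers if a != "NOT APPLICABLE"]
quality = 100 * sum(a == "YES" for a in scored) / len(scored)
print(f"{quality:.1f}% of criteria met")  # 6 YES out of 9 scored items -> 66.7%
```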

5.2.4 Statistical Analysis

We recognize that interrater agreement is pivotal when a specific degree of reliability is expected in the assessment of the quality of selected literature. In other words, we become more confident when results can be reproduced by other colleagues using the same assessment strategy. Therefore, the methodological quality was assessed in duplicate by two referees, and the intraclass correlation coefficient (ICC) was used to measure their level of agreement on all questions in the checklists. The level of agreement was considered good when the ICC was >0.8, substantial when it was 0.6–0.8, moderate when it was 0.4–0.6, fair when it was 0.2–0.4, and poor when it was <0.2 [10].
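The ICC can be computed from a two-way analysis of variance. The sketch below implements the two-way random-effects, absolute-agreement form (ICC(2,1)), a common choice when two raters score the same items; the chapter does not state which ICC variant was used, and the ratings are invented for illustration:

```python
def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC for a table of
    scores: one row per checklist item, one column per rater."""
    n = len(ratings)        # number of items rated
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Partition the total sum of squares into item, rater, and error parts.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)              # between-items mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two referees scoring ten checklist items as 1 (YES) or 0 (otherwise);
# the data are invented for illustration.
scores = [[1, 1], [1, 1], [0, 0], [1, 0], [1, 1],
          [0, 0], [1, 1], [0, 1], [1, 1], [0, 0]]
print(round(icc_2_1(scores), 2))  # -> 0.61, substantial agreement
```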

5.3 Results from Assessment of the Effectiveness and Methodological Quality of SRs

We included only SRs in our assessment because they are regarded as the best available evidence on the therapies under study. After detailed assessment, two SRs [11, 12] were retrieved from an initial pool of 90 potential studies.

5.3.1 Main Results of the Studies Selected

Kotsovilis et al. [11]: This SR demonstrated that the use of an Er:YAG laser might be superior to standard therapy (mechanical debridement/chlorhexidine) only with regard to reduction of bleeding on probing (BOP), measured six months after therapy. For the other therapy (minocycline + mechanical debridement), there was no clinically relevant difference from mechanical debridement alone. Surgical therapy (e.g., open flap + guided tissue regeneration (GTR)) resulted in greater PPD reduction and CAL gain than nonsurgical approaches, at least after six months.
Esposito et al. [12]: This SR demonstrated that PPD reduction and CAL gain occurred during the six months after regenerative procedures. Surgical therapy usually resulted in greater PPD reduction and CAL gain than conservative approaches, for example, mechanical debridement with implant scalers.
There were also improvements in PPD reduction and PAL in more severe cases when antibiotics were combined with mechanical debridement.
The authors of both SRs agree that the RCTs included have several methodological limitations (for example, small sample sizes, lack of power calculations, and lack of true randomization), which can undermine the reliability of the evidence presented.

5.3.2 Agreement Between Referees

Inter-observer agreement on the individual items from the checklists ranged from substantial to good. ICC scores for CASP, QUOROM, and AMSTAR were 0.73 (CI = 0.22–0.91), 0.93 (CI = 0.87–0.97), and 0.60 (CI = 0.25–0.83), respectively.

5.3.3 Methodological Quality of SRs

When conducting clinical research, researchers should pay attention to several “rules” to avoid introducing bias into study results. For example, a clinical study can truly be designated an RCT only when all procedures related to the randomization process (sequence generation, allocation concealment, and implementation) are properly conducted. This information should also be reported by the researchers in the paper presenting the results of the research. Ideally, study results should be replicated by an independent group using the same methodology as the researchers who conducted the original study. This reproducibility makes the results more reliable and convincing.
In good medical and dental journals, there is a trend toward requiring researchers to follow the CONSORT (Consolidated Standards of Reporting Trials) statement [13] when reporting RCTs. The statement comprises a checklist of 22 items covering the different sections of a trial report. The authors of CONSORT include an item on the checklist for one of two reasons:

1.

Because empirical evidence indicates that not reporting the information is associated with biased estimates of the effect of treatment, or
 
2.

Because the information is essential for judging the reliability or relevance of the findings.
 
Some evidence also suggests that the use of the CONSORT statement may be associated with improvements in the quality of reports of RCTs [13].
In the evidence-based model presented, only SRs were selected; therefore, checklists developed for systematic reviews were used to assess methodological and reporting quality. On average, more criteria were met in the Esposito study than in the Kotsovilis study, and the assessment therefore suggests that the methodological quality of the former SR is better than that of the latter.
