From bench-top to chair-side: How scientific evidence is incorporated into clinical practice

Abstract

Objectives

The objective of this manuscript is to describe the process through which bench-top research is incorporated into clinical practice from an evidence-based dentistry perspective.

Methods

Relevant literature is reviewed to describe the translation of bench-top research to clinical practice through the steps of preclinical testing; human clinical trials; systematic review development (question development, search/screen methods, evidence synthesis, and evidence appraisal); clinical recommendation development; dissemination strategies; the role of the clinician in finding and appraising relevant evidence; barriers to implementation with strategies to overcome those barriers; and finally, the fusion of evidence with clinician experience and patient needs and preferences in clinical decision-making.

Significance

Descriptions of processes, methodologies, tools, and resources are provided to help researchers and clinicians alike understand the steps that lie between bench-top research and clinical implementation. With mutual understanding of the complexity involved in translating research into practice, it is hoped that barriers to implementation can be overcome, leading to improved patient health outcomes.

Introduction

The process of moving bench-top research to clinical practice is often called research “translation”. Several authors have reported that it takes 17 years for scientific knowledge (“evidence” in this context) to be translated and incorporated into clinical practice; however, Morris et al. point out that the convergence on 17 years may be a coincidence, one that hides the complexities of the translation process. There is no common set of standard measurement points, or even agreement on the process model itself, to definitively answer how long it takes for bench-top research to be applied clinically.

It can be agreed, however, that it takes a long time to implement research in practice, and in the last decade, it may even be taking longer. In 2004, the Food and Drug Administration (FDA) published a report stating that the medical product development process was “increasingly challenging, inefficient, and costly. During the past several years, the number of new drug and biologic applications submitted to FDA has declined significantly; the number of innovative medical device applications has also decreased…(and) the path to market even for successful candidates is long, costly, and inefficient”. To counter this trend, the FDA has launched the Critical Path Initiative, which is “…FDA’s strategy to drive innovation in the scientific processes through which medical products are developed, evaluated, and manufactured”.

Fig. 1 presents the major steps in the research translation process, which is adapted from several sources. There is a gap in the process of translating bench-top research to human clinical research, which is often called “T1”.

Fig. 1
Simplified process of translating bench-top research to clinical practice, which involves several steps. T1 is the translation of bench-top research to human clinical trials. The human clinical research is synthesized into systematic reviews. T2 is the translation of clinical knowledge into everyday practice. Research synthesis is often incorporated in clinical guidance, but need not be. Research synthesis does not imply clinical implementation. Outcomes of implementation into clinical practice ideally include improvements in patient health and, if there is widespread implementation, population health. Note that this process is bidirectional, with clinical practice also feeding in the opposite direction and informing clinical and bench-top research.

When it is determined that a particular topic needs to be summarized, the human clinical research is synthesized into systematic reviews (SRs).

As will be described in more detail later in the manuscript, research synthesis into SRs is not the end of the process, and it does not mean that the bench-top research has made it chair-side. At this point the research has been translated into clinical knowledge, but it has not been translated into clinical guidance nor implemented into clinical practice. A gap exists at this stage in the process, often labeled “T2”.

This paper describes the challenges in moving from bench-top to human clinical research, addresses the process of the generation of clinical knowledge and guidance, discusses challenges in implementation into clinical practice, and touches on shared decision making with patients.

Translating bench-top research to human clinical research (T1)

Sung et al. defined T1 as “the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention and their first testing in humans.”

Bridging this gap is not unique to the field of dental materials. To the author’s knowledge, there has been no assessment of the barriers to negotiating the T1 gap specific to dental materials researchers; however, general advice for bridging the gap in medical research includes: (1) educating researchers and clinicians about the translation process; (2) standardizing translation across institutions; (3) facilitating interdisciplinary research teams, academic–industry partnerships, and researcher–clinician connections; (4) improving infrastructure, including shared facilities; and (5) funding positions that provide program/project management, institutional review board (IRB) process management, intellectual property management, informatics support, and facilitation of industry–academic liaisons.

To facilitate dental materials researchers in bridging the T1 gap, it is useful to think about dental materials and their place in the larger system of biomaterials, medical devices, and finally medical products, as illustrated in Fig. 2. This categorization helps bench-top dental materials researchers navigate the appropriate regulatory guidance and requirements when developing a new material for clinical application. Note that this overview is not an exhaustive review of the necessary guidance, but is intended to provide a starting point for further investigation.

Fig. 2
Dental materials in context as a subset of larger categories of medical products from a regulatory point of view.

Fig. 3 illustrates the complex nature of bench-top research, the successful negotiation of which will close the T1 gap. The FDA has identified three dimensions of bench-top research (also called preclinical testing): medical/dental utility, safety, and industrialization.

Fig. 3
Closing the T1 gap involves successfully navigating the requirements for medical/dental utility, safety, and industrialization. Test development and validation between the laboratory and clinic are crucial to improve the research translation process.

For a material to be viable for further development in dental applications, medical/dental utility needs to be shown by, for example, material property testing demonstrating that the product performs as required in the environment of use. Standard test methods are available from ANSI/ADA and the International Organization for Standardization (ISO) for different material/application combinations and should be considered when comparability of results is important.

Safety can be assessed by biocompatibility testing. Several standards give guidance on what types of tests should be performed as well as how to conduct the tests. Preclinical evaluation of dental materials’ biocompatibility is described in ANSI/ADA Specification No. 41 and other relevant documents, and is based on the type and duration of exposure to tissues. In addition, the FDA publishes guidance documents intended to help disseminate information about what is required to receive FDA approval.

Because of the critical role that processing plays in materials properties, research related to industrialization is key to improving performance. Collaborations between researchers, manufacturers, and dental laboratories would help ensure this type of research is directly utilized in dental product manufacturing.

If academic researchers discover new materials and/or properties, they should also be aware that grants are available for translational research to help them develop their inventions for commercial use. More efforts are needed to educate researchers on patenting and licensing procedures where applicable.

The last element involved in closing the T1 gap is the need for improved testing procedures, predictive models of safety and efficacy, and validation of laboratory testing against clinical performance. Enabling new products to be developed and moved to market faster, more cheaply, and with more certainty should reverse the stagnation of medical product innovation.

Materials researchers need to provide meaningful outcomes to the broader community to enable the results to be translated to the clinic. It may be helpful to consider the context of dental materials research to design studies that provide these meaningful results. Ultimately, the purpose of dental materials research is to help patients be healthier and live better lives.

Translating clinical research to clinical knowledge: generating a systematic review

Those dental materials or other medical products that have successfully navigated the T1 gap move on to the next step, which is testing in human clinical trials. If approved by the FDA, new products can be marketed and are available for clinical use. However, trials conducted for FDA approval represent only a fraction of the clinical trials that are published. When an individual or group determines that the evidence on a specific topic needs to be systematically gathered and synthesized, a systematic review (SR) is developed. Conducting an SR entails developing specific question(s); identifying, collecting, and screening potentially eligible citations; evaluating (appraising) the publications; summarizing the totality of the evidence; and discovering gaps in knowledge for further research. Note that in evidence-based medicine (EBM) and dentistry (EBD), the highest level of evidence is SRs of, preferably, randomized controlled trials of human clinical research; accordingly, bench-top studies are the lowest level of evidence and are not typically included in SRs. The following section provides more details of the steps involved in generating an SR.

Ask question

The first step in conducting an SR is to ask a specific clinical question. The best format for the question is generally called “PICO”, meaning Patient, Intervention, Comparator, Outcome [and sometimes PICOTS, which also includes identification of the Timing and Setting]. This format assists in designing the literature search strategy. In SRs, the questions are usually pertinent to a broad audience such as a profession or population. SRs are typically conducted by associations, government, or other entities, but they are also conducted by small teams of authors.
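
The PICO(TS) structure can also be represented programmatically, which is useful when managing several review questions at once. The sketch below is illustrative only; the class name and the example question (composite vs. amalgam restorations) are hypothetical and not drawn from this article.

```python
from dataclasses import dataclass

# Minimal sketch of a structured PICO(TS) question.
# All field values below are hypothetical examples.
@dataclass
class PICOQuestion:
    patient: str        # P: patient or population of interest
    intervention: str   # I: intervention being evaluated
    comparator: str     # C: alternative or control
    outcome: str        # O: outcome of interest
    timing: str = ""    # T (optional): follow-up period
    setting: str = ""   # S (optional): care setting

q = PICOQuestion(
    patient="adults with posterior carious lesions",
    intervention="resin composite restorations",
    comparator="amalgam restorations",
    outcome="restoration failure",
    timing="5 years",
    setting="general dental practice",
)
print(f"In {q.patient}, does {q.intervention} compared with {q.comparator} "
      f"affect {q.outcome} at {q.timing}?")
```

Each field then maps directly onto a block of the literature search strategy (e.g., population terms AND intervention terms).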

Search and screen literature

The next step is searching the literature and screening the search results for relevance to the question. SRs are comprehensive and aim to capture and summarize all the literature on a specified topic in a methodical and reproducible manner. A literature search strategy is developed to address the PICO question(s). Medical librarians are good resources to help in developing a literature search strategy. Inclusion and exclusion criteria, which state what types of studies are to be included in and excluded from the systematic review based on specific characteristics, should be determined prior to conducting the literature search and clearly reported.

Once the search strategy has been refined, the systematic and comprehensive literature search is conducted to find the evidence. Generally accepted SR methodology includes searching at least two electronic databases. Screening of citations is typically conducted in two stages: by title and abstract, followed by full text. It is recommended that both screening steps be conducted independently and in duplicate, with disagreement resolution methods described. Inclusion and exclusion criteria are applied to screen the retrieved citations to determine if the retrieved articles contain all of the required and none of the excluded elements of the question.

Study characteristics (determined a priori), relevant outcomes data, and study conduct (study methodology) information are extracted from the included studies independently and in duplicate, with resolution typically conducted by consensus. These data are collected into evidence tables.

Synthesize information

For an SR, outcomes are combined either narratively (qualitatively) or, if appropriate, statistically using meta-analysis. Meta-analysis is a methodology whereby individual study results are combined mathematically with the purpose of increasing the power (and accuracy) of the combined result. The results are typically presented in a figure called a “forest plot”.

Fig. 4 shows a hypothetical forest plot of a meta-analysis of the data from five studies, which was generated using RevMan. Other statistical software packages are also available to conduct meta-analyses. The interested reader is referred to meta-analysis references such as Borenstein et al. or Higgins and Green for detailed information about procedures and equations used to perform these analyses.

Fig. 4
A hypothetical forest plot of five studies that shows the resulting summary estimate of the magnitude of the effect with the 95% confidence interval (A), the estimates of heterogeneity (B), and a visual depiction of the results of each individual study as well as the summary estimate. The Z-value is also shown at the bottom of the figure. “IV” indicates the “inverse variance” method, and “Random” indicates a random-effects (vs. “fixed-effect”) model. The individual study data are also shown as means, standard deviations (SD), and totals (number of participants in the experimental and control arms of the trials).

The key characteristic to take away from the forest plot at this point is the summary estimate of the magnitude of effect (or “magnitude of benefit” if the outcome is beneficial). In Fig. 4, it is indicated by a large diamond at the bottom of the figure and the corresponding “Total (95% CI)” of 0.12 (−0.09, 0.34), denoted by letter A. A wider diamond indicates a wider confidence interval and less precision in the effect estimate, and vice versa. Whether or not the effect estimate crosses the vertical line of no effect (or null) is also of interest. A diamond not crossing the null indicates a statistically significant result, whereas a diamond crossing the null indicates no statistical significance. The Z-value indicates the number of standard deviations the mean is away from the null, and it gives another indication of the presence of a statistically significant effect. Note that statistical significance does not automatically indicate clinical relevance.
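
To make the summary estimate, confidence interval, and Z-value concrete, the following sketch pools five hypothetical mean differences using the inverse-variance (“IV”) method named in the plot. The study data are invented for illustration and are not the data behind Fig. 4.

```python
import math

# Hypothetical per-study mean differences and their standard errors
# (illustrative values only, not taken from Fig. 4).
effects = [0.30, -0.10, 0.25, 0.05, 0.15]
ses = [0.20, 0.15, 0.25, 0.10, 0.18]

# Inverse-variance weights: more precise studies receive more weight.
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval and Z-value (standard deviations from the null).
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
z = pooled / pooled_se

print(f"Summary estimate: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), Z = {z:.2f}")
```

With these illustrative numbers the confidence interval crosses zero (the null), so, as described above, the pooled result would not be statistically significant.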

The heterogeneity statistics are also presented at the bottom of the forest plot, denoted by letter B in Fig. 4. These statistics help in the determination of the degree of variability within and between studies. Heterogeneity is one aspect of the assessment of consistency of the results. The results of these assessments are combined into evidence profiles, a tool that helps organize the information systematically, which will be discussed further in the following section.
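
The heterogeneity statistics reported under a forest plot can be computed from the same study-level data. The sketch below calculates Cochran’s Q and Higgins’ I² for five hypothetical studies (invented values, not those of Fig. 4); these are standard formulas, not a method specific to this article.

```python
# Hypothetical per-study mean differences and standard errors (illustrative only).
effects = [0.30, -0.10, 0.25, 0.05, 0.15]
ses = [0.20, 0.15, 0.25, 0.10, 0.18]

# Fixed-effect (inverse-variance) pooled estimate.
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate.
q_stat = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# Higgins' I^2: percentage of total variation across studies attributable to
# heterogeneity rather than chance; floored at 0%.
i_squared = max(0.0, (q_stat - df) / q_stat) * 100 if q_stat > 0 else 0.0

print(f"Q = {q_stat:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

For these illustrative numbers Q is smaller than its degrees of freedom, so I² is 0%, which would be read as little observed between-study heterogeneity.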

Evidence appraisal

The primary reason to critically appraise the evidence is to determine to what extent the results can be trusted, in other words, what the level of certainty is in the estimate of the effect.

This refers to the internal validity of the trial. There are several tools available to critically appraise individual studies, and different tools are used for different study designs. Cochrane’s Risk of Bias Tool is useful for randomized controlled trials; the Newcastle–Ottawa tool is useful for case–control and cohort studies; the Oxford system appraises studies by trial design. Many critical appraisal tools are available at the University of South Australia’s website. None has been shown to be superior to the others in terms of repeatability or bias, but there is general consensus on the types of “domains” that should be addressed when critically appraising an article.

The Oxford system was one of the first used to assess the level of evidence of individual studies. It has been modified over many years to include appraisal of studies addressing question types beyond therapy effectiveness, and it now includes etiology, harm, prognosis, diagnosis, prevalence, and economic and decision analysis ratings.

The Cochrane Collaboration has published a tool to assess the risk of bias of individual studies across several domains, including selection, performance, detection, attrition, reporting, and other biases, which is used in their SRs.

When conducting an SR, included studies are critically appraised independently and in duplicate, with conflicts resolved by a pre-specified method such as discussion.

Translating clinical knowledge to clinical guidance

Within the T2 gap, there is a small step bridging research synthesis (in the form of SRs) and implementation. This step is the generation of specific clinical guidance intended for the clinician to apply in practice (Fig. 1). Not all SRs go through this step, but all evidence-based clinical guidance is grounded in SRs.

Definitions

Even among experts, there seems to be some confusion about the relationship between SRs and clinical recommendations (CRs) [for the purposes of this document, the terminology “clinical recommendation” will be used, which is synonymous with the term “clinical practice guideline” (CPG)]. SRs and CRs are very distinct evidence-based documents.

According to the Institute of Medicine (IOM), an SR is “a scientific investigation that focuses on a specific question and uses explicit, pre-specified scientific methods to identify, select, assess, and summarize the findings of similar but separate studies. It may include a quantitative synthesis (meta-analysis), depending on the available data.” The two ultimate outcomes of an SR on a topic where there is adequate evidence are (1) summary estimates of the magnitudes of the effects (ideally, both benefits and harms) and (2) a level of certainty in those estimates (sometimes referred to as the quality of the evidence, depending on the appraisal system that is used).

“Clinical Recommendations” (terminology used by the ADA), otherwise known as “Clinical Practice Guidelines”, are defined by the IOM as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” (bold added). CRs are typically generated by a panel of experts convened on behalf of a professional association or other entity. Standards on the components of high-quality CRs have been published. The term “CR” is used to mean the formal document, which could contain several individual recommendation statements for the clinician.

The foundation of a CR is the SR (or multiple SRs) of existing evidence. CRs can rely on a de novo SR conducted specifically for those CRs (as in the ADA’s current process) or be based on previously published SRs (see, for example, Rosenfeld et al. or the ADA’s CR on topical fluorides for caries prevention).

Fig. 5A and B provide visual models of the relationship between these documents. Fig. 5A depicts a combined CR/SR document, where SRs of several interventions (in this case three) were conducted. Fig. 5B depicts separate documents, with three separate SRs as the foundation for a stand-alone CR.

Fig. 5
A clinical recommendation document is founded on a systematic review, which may include evidence statements on several treatment options. The evidence statements typically summarize outcome measures and critical appraisals of the evidence. A (top) illustrates a combined CR/SR. B (bottom) illustrates a CR relying on separate SRs.

Another source of confusion is the fact that there are different rating systems to assess and rate the evidence in an SR (“evidence quality” or “level of certainty in the evidence”), the strength of recommendations, the methodological quality of SRs, the methodological quality of CRs, and the risk of bias of individual studies. Some emphasize patient-oriented outcomes. Early evidence rating systems were based on “levels of evidence” and categorized evidence primarily by study design. As the field of evidence-based health care has matured, the more recent rating systems for primary studies take into account the conduct of the clinical trial in terms of the risk of bias.

Determining the level of certainty in the estimate of effect of a body of evidence

When an SR is being conducted specifically for the development of a CR, the body of evidence needs to be summarized as a whole. The “level of certainty in the estimate of the effect” (or “quality of evidence”) has more dimensions than the risk of bias of individual studies, which accounts for the internal validity of the included studies. The factors that are taken into account along with internal validity are specific to the system that is being used. A brief description of these systems follows here, and the next section will describe the ADA’s system in more detail.

Many systems have been developed over the years that “grade” evidence. The IOM has not endorsed any system to the exclusion of others. There appears to be some consensus around the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system, which has been adopted by many organizations including the Cochrane Collaboration. Although there are many systems and the details vary, key similarities include assessments of (1) the risk of bias or limitations of the included studies; (2) consistency of the evidence; and (3) applicability of the evidence to the population of interest. Some systems also rate the evidence for publication or reporting bias, precision, and directness, while others include the number and size of the studies and the relationship of the evidence in the “chain of evidence”, otherwise known as the “analytical framework”.

ADA’s process to determine the level of certainty in the effect

The ADA’s method is based on the United States Preventive Services Task Force (USPSTF) method, with modifications that include elements from GRADE. The two systems are generally similar, but GRADE has published tools to support its use, which have been adapted to incorporate the USPSTF criteria. Another difference is the number of levels of “certainty” or “quality” of the evidence: the USPSTF (and ADA) have three (high, moderate, and low), while GRADE supports four (high, moderate, low, and very low).

The essentials of the USPSTF criteria are shown in Table 1, which has been modified from the ADA’s Handbook.

Table 1
Criteria for consideration when assessing the level of certainty in a body of evidence. a

Level of certainty in effect estimate / Description

High: This statement is strongly established by the best available evidence; the conclusion is unlikely to be strongly affected by the results of future studies. The body of evidence usually includes consistent results from well-designed, well-conducted studies in representative populations.

Moderate: This statement is based on a preliminary determination from the current best available evidence; as more information becomes available, the magnitude or direction of the observed effect could change, and this change could be large enough to alter the conclusion. Confidence in the estimate is constrained by one or more factors, such as:
•the number, size, or risk of bias of individual studies;
•inconsistency b of findings across individual studies;
•limited applicability due to the populations of interest; or
•lack of coherence in the chain of evidence.

Low: The available evidence is insufficient to support the statement, or the statement is based on extrapolation from the best available evidence; more information could allow a reliable estimation of effects on health outcomes. Evidence is insufficient or the reliability of estimated effects is limited by factors such as:
•the limited number or size of studies;
•important flaws in study design or methods leading to high risk of bias;
•inconsistency b of findings across individual studies;
•gaps in the chain of evidence;
•findings not applicable to the populations of interest; or
•a lack of information on important health outcomes.
