How Should We Evaluate and Use Evidence to Improve Population Oral Health?

Generating and implementing evidence-based policy is an important aim for many publicly funded health systems. In dentistry, this is based on the assumption that evidence-based health care increases the efficiency and effectiveness of interventions to improve oral health at a population level. This article argues that a linear logic model that links the generation of research evidence with its use is overly simplistic. It also challenges an uncritical interpretation of the evidence-based paradigm and explores approaches to the evaluation of complex interventions and how they can be embedded into policy and practice to improve oral health at a population level.

Key points

  • This article questions an uncritical adoption of the evidence-based paradigm for interventions to improve oral health at a population level.

  • A linear logic model that links the generation of research evidence with its use is overly simplistic.

  • This article explores approaches to the evaluation of complex interventions in dentistry and how they can be embedded into policy and practice.

Background

Half of the world’s population, 3.5 billion people, suffered from untreated oral conditions in 2015: 2.5 billion people had untreated caries in permanent teeth, 573 million children had untreated caries in deciduous teeth, 538 million people had severe periodontal disease, and 276 million people had total tooth loss. Dental diseases produce large societal costs, both as treatment costs and as productivity losses; for the 28 countries of the European Union, dental diseases led to treatment costs of $100 billion (€92 billion) and productivity losses of $57 billion (€52 billion) in 2015.

Given this, generating and implementing evidence-based policy are important aims for many publicly funded health systems. In dentistry, this is based on the assumption that evidence-based health care increases the efficiency and effectiveness of interventions to improve oral health at a population level. It is increasingly recognized, however, that a linear logic model that links the generation of research evidence with its use is overly simplistic. This article challenges an uncritical interpretation of the evidence-based paradigm and explores approaches to the evaluation of complex interventions and how they can be embedded into policy and practice to improve oral health at a population level.

The challenge of generating the evidence

The process of generating robust research evidence has traditionally relied on randomized controlled clinical trials (RCTs) to empirically evaluate interventions. Observed effects are pooled statistically and the evidence is then synthesized to create evidence-based policies. Research evidence is then either pushed from the research community (in guidelines or evidence summaries) or pulled by clinicians who are seeking evidence-based approaches to inform their care. There are several inherent difficulties, however, with this push-pull assumption when the intervention is complex or where it attempts to “introduce new, or modify existing, patterns of collective action in health care or some other formal organisational setting.”

The first problem is that the quality of many trials remains poor. In Glasziou and colleagues’ study, 40% to 89% of the interventions were not replicable owing to poor description and, in most studies, at least 1 primary outcome measure was changed, introduced, or omitted. In Yordanov and colleagues’ methodological review and simulation study of trials included in Cochrane reviews, 43% of the 1286 studies identified had at least 1 domain at high risk of bias, and 142 of a random sample of 200 of these trials were confirmed as high risk. Second, “trialists routinely claim that uncertainty doesn’t exist. We pick single point estimates for all of these parameters, create a design that would work well if all of those guesses happen to be true simultaneously (a very unlikely event) and then we put that design into a grant that we hope gets funded.” As Lewis further highlights, “this approach leads to an increased risk of falsely negative or inconclusive results.”
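Lewis’s point can be made concrete with a little arithmetic. The sketch below is a minimal illustration, with alpha, target power, and both effect sizes chosen purely for illustration rather than taken from any particular trial; it shows how a design powered on a single optimistic point estimate of the effect size loses much of its power when the true effect is only modestly smaller:

```python
# Minimal sketch: power erosion when a design's single point estimate of
# the effect size is optimistic. Two-sample z-test approximation; all
# parameter values are illustrative assumptions.
from math import ceil, sqrt

from scipy.stats import norm

ALPHA = 0.05          # two-sided significance level (assumed)
TARGET_POWER = 0.90   # planned power (assumed)
ASSUMED_D = 0.50      # standardized effect size guessed at design time
TRUE_D = 0.35         # plausible, slightly smaller true effect

z_alpha = norm.ppf(1 - ALPHA / 2)

def n_per_group(d: float) -> int:
    """Per-group sample size giving TARGET_POWER for effect size d."""
    z_beta = norm.ppf(TARGET_POWER)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

def power(d: float, n: int) -> float:
    """Approximate power when the true standardized effect is d."""
    return float(norm.cdf(d * sqrt(n / 2) - z_alpha))

n = n_per_group(ASSUMED_D)
print(f"n per group, planned for d={ASSUMED_D}: {n}")             # 85
print(f"power if the guess is right: {power(ASSUMED_D, n):.2f}")  # ~0.90
print(f"power if instead d={TRUE_D}: {power(TRUE_D, n):.2f}")     # ~0.63
```

Under these illustrative numbers, a trial planned for 90% power delivers roughly 63% power, which is exactly the increased risk of falsely negative or inconclusive results that Lewis describes.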

An even more fundamental issue is that effect sizes alone are not enough to facilitate the implementation of research findings in clinical practice or public health: “effect sizes do not provide policy makers with information on how an intervention might be replicated in their specific context, or whether trial outcomes will be reproduced.” As Grant and colleagues highlight, one reason so much clinical research is ignored is that “there is not enough contextual information provided to transfer the results from the trial setting into other settings.” A further problem is the common conflation of efficacy and effectiveness; demonstrating that a health technology works (efficacy) does not necessarily mean that it can improve health at a population level (effectiveness).

Another critique of the evidence-based paradigm relates to how evidence is synthesized and analyzed. Trials with positive results are published in approximately 4 years to 5 years, whereas trials with null or negative results take 6 years to 8 years to publish. Because multiple trials are required for a single systematic review, such reviews are highly resource intensive and slow to produce. This contrasts with the often rapidly moving policy context, where structures at the microlevel, mesolevel, and macrolevel (ie, at the levels of the clinician, the commissioner of services, and governments, respectively) can change quickly. As Gannan and colleagues highlight, “emerging issues require access to high-quality evidence in a timely manner to inform system and policy response.”

Another concern with the process is that many systematic review methodologies tend to strip out the policy context. This has led some researchers to adopt a theoretic approach to help guide the review process and ensure that key elements are retained, particularly where the intervention is complex. Implementation frameworks, such as the Knowledge to Action framework, and other methods (for example, realist syntheses) explicitly seek to include and understand the role of context and how and why interventions or programs work. The authors consider this critical. As Northridge and Metcalf highlight, there is a “need to extract the core issues from the context in which they are embedded in order to better ensure that they are transferable across settings.” Such insights highlight the value of shifting from the traditional binary question of effectiveness (does it work?) toward a more sophisticated explanation of how and why it works.

Once evidence has been synthesized, the response by the evidence user can be idiosyncratic, and these problems become magnified when interventions are introduced into complex social or organizational systems. Several system-related challenges introduce variation that needs to be considered and managed: the variability, stability, and predictability across and within organizations; the range of solutions applicable to any given problem; the multiple mechanisms involved; the differing ability of the individual or organization to affect these mechanisms; and the varying relationships between mechanisms and outcomes (in terms of linearity and impact). Equally, evidence is often weighed alongside other clinical factors, and experiential knowledge can be privileged. As a result, the production of evidence is not in itself sufficient to influence change. Decision making is a process, not a one-off event, and relies on productive ongoing relationships and on the organizational context.

Producing change in population oral health?

One of the key challenges relates to the relevance of RCTs and the degree to which they are used to shape policy aiming to improve a population’s oral health. There is evidence that outputs from trials have had a direct impact on public health policy. Recently, Chestnutt and colleagues’ Seal or Varnish? trial led to a near-immediate cessation of a national sealant scheme across Wales in favor of a fluoride varnish scheme. They concluded that “in a community oral health programme utilising mobile dental clinics and targeted at children with high caries risk, the twice-yearly application of fluoride varnish resulted in caries prevention that is not significantly different from that obtained by applying and maintaining fissure sealants after 36 months” and that fluoride varnish was more cost effective. Equally, Milsom and colleagues’ trial on dental screening programs for school-aged children produced a policy change by the National Screening Committee in the United Kingdom, and Innes and colleagues’ trial on the Hall technique made a substantive impact on the management of childhood caries. This contrasts, however, with several trials whose results have had less impact to date.

As highlighted previously, the evidence-based paradigm can be applied without critical thought. At a population level, there are arguments for including other study designs to augment the evaluation of dental public health programs and health policies. The recent debate after the publication of the Cochrane review on the effectiveness of water fluoridation illustrates this point. The review’s conclusions were shaped by its exclusion of observational studies: it concluded that “there is very little contemporary evidence, meeting the review’s inclusion criteria.” In their critique, however, Rugg-Gunn and colleagues argued that “with public health interventions…there are frequently no such trials because the highly complex practical, ethical and financial factors involved mean that RCTs are not feasible.” They went on to argue that, unlike for individual clinical interventions, evidence has to be drawn from a wide variety of research designs to determine whether a complex public health intervention is cost effective. This approach was undertaken by the National Health and Medical Research Council (NHMRC) in Australia, which reached a different conclusion: “the NHMRC strongly recommends community water fluoridation as a safe, effective and ethical way to help reduce tooth decay across the population.”

Antibiotic prophylaxis for infective endocarditis is another example. This was commonplace in the United Kingdom until 2008, when the National Institute for Health and Care Excellence (NICE) stated that “antibiotic prophylaxis against infective endocarditis is not recommended for people undergoing dental procedures.” NICE relies heavily on evidence from clinical trials, and evidence from other study designs is downgraded; as such, it seemed locked into a recommendation that was at odds with the international consensus. It also became at odds with a large observational study that linked the cessation of antibiotic prophylaxis following the NICE guidance with an increased incidence of infective endocarditis. In recognition of this tension, and yet without any further evidence, NICE changed its recommendation in 2016 to “antibiotic prophylaxis against infective endocarditis is not recommended routinely for people undergoing dental procedures,” creating a great deal of confusion.

The use of taxation for sugar-sweetened beverages (SSBs) is another area where uncritical adoption of the evidence-based paradigm is problematic. Empirically evaluating the impact of a sugar tax in a trial would require randomizing participants to different price levels within a single country, which is not feasible. Furthermore, cross-country comparisons would be highly resource intensive, and a systematic review drawing on multiple such trials is even more unlikely.

Quasi-experimental methodologies have been used to show a reduction in the consumption of SSBs and an increase in water consumption after the implementation of a sugar tax, and modeling studies have explored the potential impact of such an intervention on population health and the economy. In the absence of experimental evidence, a health care decision maker has to ask which other types of information are suitable for timely and evidence-informed decision making. Health policies, particularly with regard to public health, often need to be formulated at a time when the respective evidence base is still limited, and the traditional evidence-based model built around RCTs does not fit well with public health interventions, which require strong theoretic underpinnings, wider methodological approaches, and a focus on complex systems.
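As a rough illustration of what such ex ante modeling involves, the sketch below projects the change in SSB consumption from a hypothetical tax using an assumed own-price elasticity of demand. Every parameter value is an illustrative assumption, not an estimate from any particular study:

```python
# Minimal ex ante sketch: projecting SSB consumption under a hypothetical
# tax via an assumed constant price elasticity of demand. All values are
# illustrative assumptions.
TAX_RATE = 0.10                 # ad valorem tax of 10% (assumed)
PASS_THROUGH = 1.0              # share of tax reaching shelf prices (assumed)
ELASTICITY = -1.2               # own-price elasticity of SSB demand (assumed)
BASELINE_L_PER_CAPITA = 150.0   # annual SSB consumption (assumed)

price_change = PASS_THROUGH * TAX_RATE
# Constant-elasticity demand: Q1 = Q0 * (1 + relative price change) ** elasticity
projected = BASELINE_L_PER_CAPITA * (1 + price_change) ** ELASTICITY
change_pct = 100 * (projected / BASELINE_L_PER_CAPITA - 1)
print(f"projected consumption: {projected:.1f} L per capita ({change_pct:.1f}%)")
# Fuller ex ante models layer on substitution (eg, to water), caries risk
# functions, and downstream health and economic outcomes.
```

A fuller structural model or microsimulation would propagate this consumption change through to caries increments and treatment costs, which is the kind of output decision makers need when trials are infeasible.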

The application of theoretic approaches to help evidence use

Psychological theory is increasingly used to predict individual behavior change and improve the adoption of evidence. These theories set out to understand the proximal determinants of behavior, including the beliefs (cognitions), knowledge, attitudes, and motivations that underlie an individual’s behavioral intentions and, ultimately, behavior. The underlying assumption is that understanding behavior is enough to produce changes at scale. Such approaches have been used in dentistry in relation to adherence to guidelines for fissure sealants, intraoral radiographs, and caries management for children, and to advising on oral health–related behaviors.

To date, psychological theories have been shown to be important because they target behavioral drivers that are potentially amenable to change. Recent developments have also seen attempts to synthesize these theories. The Theoretical Domains Framework (TDF) brings together a large number of psychological theories and constructs that have been found to influence health professionals’ behavior. The 14 domains of the TDF include constructs such as knowledge, skills, social/professional role and identity, and beliefs about capability.

The TDF has also been applied in dentistry, to antibiotic prescribing, caries management, and the application of fluoride varnish to children’s teeth. There remains a lack of focus, however, on the organizational context, including practice culture and other factors that can influence individual clinicians’ decision making. This is problematic because the implementation of evidence requires complex changes in clinical practice within complex health systems. These changes take place not through individual behavioral processes alone but through collective action enacted by teams within health care organizations. For example, dentists commonly fail to adopt evidence-based preventive care not because of inertia or a lack of up-to-date knowledge or skills but because of practical (the existing logistics of the dental practice), cultural (dentists’ perceptions of their patients and of patient motivations, values, and cooperativeness), and economic (time constraints, financial risk, and funding systems) barriers. Arguably, rather than targeting different levels for effective change, whether the individual clinician (eg, dentist or dental hygienist), the health care unit or team (eg, dental practice), or the health care organization (eg, National Health Service), the system as a whole needs to be considered.

What can implementation science offer?

Given the persistent and often intractable challenges of evidence-based health care, there has been growing interest in the study of implementation processes and in approaches to unpack this “black box.” Implementation research reinforces the assertion that evidence production does not flow naturally into evidence use. As discussed previously, people use tacit and collective knowledge to determine whether evidence is credible and whether it fits with their experience and practice. Evidence users are not passive recipients, and their practice is influenced by the context in which they work. Features of the organization, such as organizational slack, resources, the nature and quality of leadership, culture, and communication systems, are all important.

The evidence base suggests that there is more promise in approaches that are theoretically based, interactive, and tailored. For example, there is growing support for the use of change agents in implementation processes. One such change agent is the facilitator; with facilitation, evidence has been reported to be 3 times more likely to be adopted. Training lay workers as facilitators of quality improvement in Vietnam significantly reduced neonatal mortality. Implementation frameworks are also important in the choice and development of interventions, in identifying appropriate outcomes, measures, and variables of interest, and in guiding the evaluation of implementation processes and outcomes. These include Promoting Action on Research Implementation in Health Services, Knowledge to Action, the Consolidated Framework for Implementation Research, and normalization process theory. These frameworks help shift thinking away from viewing the gap between evidence and practice as a service problem toward one that acknowledges the importance of how knowledge is created. The idea that users and producers of evidence occupy 2 separate worlds has not been helpful in accelerating progress with the evidence-based practice agenda. As such, there is increasing interest in more collaborative arrangements, such as the Collaborations for Leadership in Applied Health Research and Care in the United Kingdom, in which the producers and users of evidence work together to create knowledge that solves service challenges in more coproductive ways.

Discussion

This article argues that an uncritical adherence to the evidence-based paradigm is not always feasible, desirable, or ethical for complex health care interventions. In addition, it argues that evidence production is not enough to stimulate evidence use, particularly highlighting the importance of carefully considering the theoretic underpinnings of change and the role of the context for implementation.

There are several pragmatic steps that could be taken when designing trials of complex interventions to improve their adoption. These include thinking about implementation a priori and working with policy makers, commissioners, public health officials, clinicians, and the public at the beginning of the evidence generation process to ensure that the research agenda is coproduced. Factors associated with the context of a complex intervention should also be considered at the earliest stage in the evaluation process, using theoretically informed feasibility and pilot studies. Theoretic frameworks should be used more prospectively as part of the design of trials of complex interventions (or of other ex post methodologies). Equally, process evaluations should be run in parallel with empirical evaluations of complex interventions to help understand “the causal assumptions underpinning the intervention and…how interventions work in practice.” In addition, Studies Within A Trial can help identify the best ways to ensure adequate representation among those who are recruited.

The standardization of outcome measures used in trials of population programs to promote oral health would also be of real value. As highlighted by Kirkham and colleagues, there is “growing recognition that insufficient attention has been paid to the outcomes measured in clinical trials, which need to be relevant to health service users and other people making choices about health care if the findings of research are to influence practice.” In the past, the heterogeneity of outcome measures used by trialists has made meta-analysis difficult and has added to research waste. Taking a coproduced approach to developing a core outcome set can reduce this type of research waste. Heterogeneous outcome measurement has also been a feature of oral health research, and work to validate core outcome sets (eg, the current project between the World Dental Federation and the International Consortium for Health Outcomes Measurement) may inform the consistent selection of oral health outcomes for relevant interventions.

More thought should be given to the type of evidence that is assimilated in systematic reviews of large-scale programs to improve oral health, including the use of ex post and ex ante designs. Ex post techniques typically evaluate the impact of an already implemented health policy or program and include a range of quasi-experimental methods, such as instrumental variables, difference-in-differences, panel data analyses using fixed or random effects, and regression discontinuity designs (see Listl and colleagues). A recent study used such an approach to evaluate the impact of an SSB tax in Mexico and showed a significant reduction in consumption since the tax’s introduction in 2014. In contrast, ex ante techniques are designed to simulate the future response to hypothetical interventions and to make comparisons with simulated alternative scenarios of interest to the decision maker. Ex ante methods include structural modeling, agent-based modeling, and microsimulation and, unlike ex post methods, can help predict the short-term, mid-term, and long-term health effects of an intervention. Such methodologies can provide helpful information for the evaluation of policies and interventions that otherwise would not be rigorously evaluated because standard RCT-related methodologies are neither feasible nor suitable.
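To make the ex post approach concrete, the sketch below runs a difference-in-differences regression on synthetic data (not the Mexican data; the variable names and the simulated effect size are invented for illustration). The coefficient on the interaction of the “taxed” and “post” indicators recovers the policy effect under the parallel-trends assumption:

```python
# Minimal difference-in-differences sketch on synthetic data. The
# interaction term taxed:post estimates the policy effect under the
# parallel-trends assumption; all values here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "taxed": rng.integers(0, 2, n),  # 1 = jurisdiction with the tax
    "post": rng.integers(0, 2, n),   # 1 = period after introduction
})
# Simulate purchases with a true policy effect of -0.6 units
df["purchases"] = (
    10 + 0.5 * df["taxed"] + 0.3 * df["post"]
    - 0.6 * df["taxed"] * df["post"] + rng.normal(0, 1, n)
)

model = smf.ols("purchases ~ taxed * post", data=df).fit(cov_type="HC1")
print(model.params["taxed:post"])          # DiD estimate, close to -0.6
print(model.conf_int().loc["taxed:post"])  # its 95% confidence interval
```

Real applications add jurisdiction and time fixed effects and test the parallel-trends assumption against pre-intervention data, but the identifying logic is the same.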

More attention should be paid to understanding context, and attempts should be made not to throw away evidence during the assimilation process that could help describe the pathway to impact. Again, the use of theoretic frameworks and logic models to help guide the review process is key. Such approaches can “aid in the conceptualization of the review focus and illustrate hypothesized causal links, identify effect mediators or moderators, specify intermediate outcomes and potential harms, and justify a priori subgroup analyses when differential effects are anticipated.” They can describe the system into which the intervention is introduced (system-based logic model) or the processes and causal pathways that lead to the outcomes (process-oriented logic model). They can also help identify the most relevant inclusion criteria and clarify the interpretation of results for policy-relevant conclusions.

Finally, more thought should be given to the use of realist reviews and rapid realist reviews in the dental literature, which specifically account for context and seek to understand the underlying program theories (what works for whom, why, and in what circumstances). These would provide a more nuanced understanding and would augment and broaden the triangulation of existing evidence-based approaches for large-scale change in population oral health. Moving toward these suggestions presents a major but welcome challenge for oral health research: it would enrich the methodological scope of evaluation and facilitate the wider use and implementation of appropriate evidence in clinical practice and public health, with the potential to improve the oral health of the population.

Disclosure Statement: The authors have nothing to declare.

Financial Disclosure: There are no known commercial or financial conflicts of interest or funding sources for any of the authors.

