Science in the Practice of Clinical Dentistry

This article introduces dental clinicians to the essential workings of scientific research and statistical analysis, providing the knowledge necessary to understand and critically review scientific work.

Key points

  • Comprehend study design and its impact on research quality and meaning.

  • Learn essential biostatistical tests and analyses.

  • Understand what is meant by statistical significance and its relation to clinical significance.

The rationale and challenges of implementing scientific data into clinical practice

Clinicians spend extensive amounts of time and energy learning to be the most proficient operators they can be, executing treatment at ever more exceptional levels. They know that their skills, materials, techniques, diagnostic abilities, and treatment planning abilities ultimately determine how successful they are in addressing the needs and desires of their patients.

Clinical decision making is a complex skill. It requires the synthesis of hundreds of questions for every decision: material selection, preparation design, implant locations, and so forth. Some of these complex decisions are made out of habit or tradition, with clinicians relying (not necessarily incorrectly) on their mentors, previous successes, and training. Ultimately, clinicians must continue to evaluate, adapt, and learn; they must strive to be perpetual students, continually endeavoring to improve their skills and knowledge. These improvements might be in longevity, aesthetics, ease of use, patient comfort, patient health, duration of treatment, economics, or consistency.

When making any clinical decision, there are 4 areas to take into consideration:

  • 1. Patient desires (eg, finances, time constraints, aesthetics)

  • 2. Clinical evaluation (eg, bone volume, American Society of Anesthesiologists classification, functional loads)

  • 3. Unique clinical experience (eg, “We have been very successful immediately loading implants in our practice”)

  • 4. Scientific evidence (eg, most systematic reviews show little increase in risk for immediate load implant protocols when carefully selected and skillfully executed)

None of these 4 areas should be neglected, although at times clinicians may find that the strength of one outweighs another. Their patients will be best served, and their outcomes improved, when clinicians successfully incorporate as much information as possible from each area. Scientific studies are not the be-all and end-all of clinical decision making; they represent only a piece. Clinicians’ anecdotal experiences should carry weight in the decision-making process. Conflicts arise between what clinicians have seen to be true, in their hands, and the scientific literature. Potential reasons include differences in patient populations, surgical approach, implant systems, and the skill of technicians. Individual clinicians develop standard operating procedures based on their unique situations, but efforts should be made to understand when alternative materials and techniques should be implemented for a given clinical scenario. Expertise is built on accurate answers to hundreds of questions from all 4 areas in every clinical decision that clinicians make.

However, this article is about the science part of this process. The other 3 areas are not addressed here but should in no way be neglected. Properly interpreted and understood, scientific evidence can significantly improve results. Scientific studies give insight into the expected success rates of a new material, complications that can be expected with a particular treatment protocol, or which patients might be at increased risk for failures.

There are a few requirements, though: the science must be sound, the analysis appropriate, the question answered relevant, and the interpretation cautious. The job of clinicians is not to determine whether the science was sound or whether the statistical analysis was appropriate. That is the job of biostatisticians, editors, and reviewers. The job of clinicians is to determine whether the study question being answered is relevant to the current bigger clinical question. Clinicians must also determine whether the interpretation of the study is correct. It is all too common in the scientific literature to see conclusion statements woefully undersupported or even countered by the data in the study.

A Hypothetical Vignette

Imagine that a new 1-piece implant has come to market with US Food and Drug Administration (FDA) approval. Clinicians are interested in switching over to it for most of their implants and the patients are asking for it, but will it work? This is not a binary question, and perhaps there will be indications where it will and will not prove sufficiently successful. There is no objective threshold for success. What one clinician might classify as successful might be intolerable to another. At any rate, clinicians are considering surgically implanting this new device into a vast number of their patients and results matter.

The clinicians ask a manufacturer representative to come by the office. They are shown a bar graph from an osseointegration study published in a reputable journal, and the new implant’s bar is the tallest. It has superior bone to implant contact compared with the control, and in another study, this one testing dry static axial load to failure, the new implant’s number is the biggest and there is a P value of .02. Then there is a quote from a supposedly famous dentist: she loves the new implant and so do her patients.
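A P value of .02 tells the clinicians only that the measured difference is unlikely to be due to chance; it says nothing about whether the difference is large enough to matter clinically. A minimal sketch in Python (with invented load-to-failure numbers, not data from any real study) shows how a comfortably “significant” P value can accompany a difference of only about 2%:

```python
import random

# Hypothetical dry static axial load-to-failure values (in newtons).
# These numbers are invented for illustration; they come from no real study.
new_implant = [812, 798, 825, 804, 819, 808, 815, 801, 822, 810]
control = [795, 788, 801, 792, 799, 790, 797, 786, 800, 793]

def mean(values):
    return sum(values) / len(values)

observed_diff = mean(new_implant) - mean(control)  # ~17 N, about 2% of the mean

# One-sided permutation test: shuffle the group labels many times and count
# how often chance alone produces a difference at least as large as observed.
random.seed(0)
pooled = new_implant + control
n = len(new_implant)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed_diff:
        extreme += 1
p_value = extreme / trials

print(f"Mean difference: {observed_diff:.1f} N")
print(f"P value: {p_value:.4f}")
```

A permutation test is used here only because it needs no statistics library; the published study would more likely have used a t test. Either way, the P value alone cannot answer the clinically relevant question: whether a roughly 2% difference in bench-top load to failure predicts any difference in performance in the mouth.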

So far, everything looks promising and the clinicians decide to switch over to the new implant. For 12 months, many of the implants placed in the practice are the new implant; hundreds of them. The patients are happy, and the referrals are flowing in, but then the failures begin to show up. Slowly at first, then en masse. Peri-implantitis, prostheses repeatedly debonding, abutments fracturing. It seems that osseointegration and dry static axial load to failure are not the only factors that the clinicians should have considered before aggressively incorporating this material.

Looking back, the clinicians wonder how this implant got through the FDA clearance process. Without going into the granular details, suffice it to say that dental devices are generally cleared by the FDA without extensive testing of safety and efficacy; in general, the manufacturer claims (to the FDA) that the new device or material is largely similar to previously cleared devices. So what did the clinicians miss in their cursory evaluation of this implant, and what should they have done differently? Assuming the 2 studies they flipped through were properly performed and analyzed, the osseointegration and dry static axial load performance of the implant are as good as or superior to those of current implants. However, those are only 2 of the hundreds of questions clinicians should be asking about whether this implant is viable. This implant is 1 piece, so why are none of the other implants clinicians use of this design? What are the clinical challenges of this design? Are the results of a dry static axial load test really correlated with how implants are treated in the oral environment? Is there a better way to test load to failure? Did the famous dentist raving about this implant have a financial interest in the manufacturer?

I am proposing that, in this scenario and hundreds of others like it, clinicians will be better able to identify and avoid potential problems if they have an understanding of the literature. Science cannot avoid all problems in clinical dentistry, but I hope that this article helps provide a start on a journey of better understanding of the scientific literature that clinicians rely on (directly or indirectly) to make clinical judgements. A proficient understanding of the science and literature will not occur immediately. It takes time and effort. As a starting point, I suggest that clinicians make a habit of finding a reputable, scientifically based, clinically oriented journal in their area of focus and read through it on a regular basis. This habit will increase familiarity and comfort with how studies are performed, analyzed, and interpreted. The learning curve may be steep at first. Ignore studies that are of no interest. Read through the others with the understanding that it is not necessary to grasp every aspect of what is presented. With time, a greater level of comfort and confidence will develop and I believe the reader will become significantly more empowered to make strong, science-backed decisions.

Types of studies and their value in clinical practice

Most clinicians have seen something like the hierarchy of evidence pyramid (Fig. 1). A few things about it: it is not universally agreed on, and various disciplines in medicine and dentistry lean on their own variations (more than 80 such hierarchies have been published); epidemiology, for example, uses a very different hierarchy than orthopedic surgery. Nor does the pyramid mean that any conclusion from an upper level is always more correct than a conclusion from a lower level. Table 1 provides a brief explanation of the various types of studies and their use in clinical dentistry.

Fig. 1
A hierarchy of evidence for clinical dental research. This hierarchy is best understood as a risk-of-bias pyramid, not a hierarchy of truth. Conclusions from the higher levels of evidence are more likely to be correct, but they are not inherently superior. Sys, systematic.

Table 1
Levels of evidence for clinical treatment disciplines

  • Systematic reviews of homogeneous RCTs: An expert synthesis of the best available interventions on a particular topic. The highest level of evidence, but rarely seen in clinical disciplines because of the scarcity of strong RCTs.

  • Strong RCTs: A randomized, controlled, blinded trial; the only way to establish cause and effect. Rare in any surgical discipline (especially clinical dentistry). The best available new evidence.

  • Systematic reviews of cohort studies: An expert synthesis of the best available observational studies. Good for summarizing risk factors associated with an outcome.

  • Individual cohort studies: A high-level observational study in which no treatment is performed on patients as part of the study. In general, a tool used less in clinical dentistry and more in public health and epidemiology.

  • Systematic reviews of case-control studies: An expert synthesis of the best available research on risk factors for rare outcomes. Not common in clinical dentistry.

  • Case-control studies: A retrospective study comparing patients with a rare outcome (cases) to those without (controls) to identify risk factors. An efficient way to identify risks for diseases not often seen.

  • Case series: A series of cases showing proof of concept, not proof of expected results or expected complications. Useful for seeing what might be possible in expert hands and for exploring new techniques or materials; interpret cautiously.

  • Expert opinion: A well-regarded individual’s or group’s opinion on a topic; reliability and accuracy depend on the veracity and knowledge of the expert. An efficient and practical way to find information, but with high potential for bias.

Oct 7, 2020 | Posted in General Dentistry