2 Clinical Research Designs


Robert J. Weyant, MS, DMD, DrPH

Professor and Chair, Department of Dental Public Health and Information Management, School of Dental Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania

Introduction

Dr. Jones is an orthodontist who recently graduated from training and is now in private practice, having purchased her practice from a retiring orthodontist. After several months, Dr. Jones noted that she was receiving a large number of referrals from community general practice dentists of young children aged 7–9 who have prominent front teeth (i.e., Class II malocclusion). The referrals implied that the young patients would benefit from early treatment, and most of these patients were told by their referring general dentists that if they received “early” treatment (by age 9), they could avoid more extensive treatment when they were older (in adolescence, after age 12). Dr. Jones was happy to have the referrals but was not sure she could tell the patients with confidence that they would be less likely to need orthodontic treatment as adolescents if they received “early” treatment now. Moreover, Dr. Jones was taught that both headgear and functional appliances were appropriate approaches for the treatment of children with prominent upper front teeth but was not sure which approach would be best. Dr. Jones felt that she needed more information so that she could discuss treatment in an informed manner with her patients and make scientifically sound clinical decisions about recommending treatment.

The above vignette illustrates a situation encountered frequently by clinicians: the need for additional, high-quality evidence from the scientific literature to assist them in their clinical decision making. In this mode, clinicians are consumers of the scientific literature as opposed to producers of science; consequently, they need a broad understanding of research methods and designs so that they can properly interpret the scientific basis for clinical practice. Whether orthodontics or any area of medicine is a science is debatable because the nature of the problems addressed by medical and dental care draws on ethics, culture, and economics in a way not commonly found in chemistry, physics, and biology. Nevertheless, as with all of biomedicine, orthodontics can thank empirical research for helping to refine and optimize contemporary approaches to patient care. The research underlying clinical practice ranges from basic sciences, such as genetics and physiology, to social sciences, such as psychology and sociology. All of these clinical evaluative sciences inform clinical practice, and all are fundamentally derived from the same overarching scientific process or method. At its best, research helps to improve the quality of care and patient outcomes, but when the science is poor or misunderstood, its misapplication can lead to just the opposite result. Hence, understanding the elements of good research, and what makes science important to clinical practice, is needed as a basis for clinical care. This chapter is designed to aid in this understanding.

The Scientific Method

The scientific method is, in fact, part of a broader area of philosophy known as epistemology. Epistemology is the branch of philosophy that deals with the nature of and limits to human knowledge (Salmon et al. 1992). A proper discussion of epistemology and the philosophy of science is well beyond the scope of this chapter. Suffice it to say that our concern in clinical practice is to have the best “knowledge” available to help our patients. Humans have many ways of “knowing” something, including intuition, faith, reason, authority, testimony, personal experience, and science. The distinction of importance here is between belief (I think something is true) and knowledge (something is actually true). Arguably, then, of all the ways we have of knowing something, the scientific method provides us with the best approach if our goal is obtaining objective, valid, and useful information.

Science pursues knowledge by essentially asking and then answering questions. Simple enough. But the devil is in the details. The veracity of the information generated by this process is entirely dependent on the rigor and objectivity employed in how one seeks out the information to answer the question. Moreover, the specific approach to answering the question, that is, the research design, places inherent limits on the conclusions (answers) that can be made. This chapter provides a brief overview of basic research development, the common clinical research designs, their uses, strengths, and limitations, and a discussion of best practices that apply broadly to any research endeavor. The intent is to provide a broad overview framed in terms related to clinical orthodontics.

Developing a Hypothesis

Although it is seemingly straightforward, asking the right question is key to moving science forward. The questions of science are derived from many sources, including intuition, clinical experience, and reading the scientific literature.

Any question that is focused on naturalistic answers (as opposed to metaphysical answers) is fair game for science. Some questions only serve to satisfy the questioner’s curiosity, whereas other questions are the motivators that advance a scientific discipline. The degree to which a question is framed to address a gap in our general knowledge of a subject is the degree to which a question serves to motivate research and move science forward. These are questions that focus us on those areas that lie just beyond our current understanding of how things work. Consequently, science tends to move forward incrementally by constantly working at the frontier of our current understanding and carefully taking the next logical step forward. Scientists (and clinicians) working in a field generally know where that boundary is between current knowledge and our need for new information, and it is this knowledge that allows them to create new questions that lead to the research that advances the field.

Dr. Jones in the vignette has implicitly asked a question that derives from her clinical experience with her new patient population: Can early orthodontic treatment reduce or prevent the need for additional treatment later in adolescence?

Based on one’s experience in an area, it is possible to offer a prediction of what the answer to a question might be. In science this provisional answer is referred to as a hypothesis. From the above example, Dr. Jones might hypothesize that early treatment will, in fact, reduce the need for later treatment for a substantial number of her patients. Any orthodontist understands this question, and most would have an opinion about the answer. In contrast, naive individuals (i.e., non-dentists) not only would lack an answer to this question, they would also be very unlikely to think of the question in the first place.

When asking a question about treatment outcomes, one is essentially asking about causality. Does treatment A cause outcome B? One of the fundamental goals of clinical research is to establish causality. In so doing we improve our understanding of underlying mechanisms and we provide an opportunity to design clinical interventions aimed at improving the quality of clinical care. In our example, Dr. Jones wishes to know if early treatment is causally related to subsequent occlusal status (and hence the need for additional treatment).

An important concept that underlies the notion of causality in clinical research is that most associations in biomedicine are probabilistic (stochastic) rather than deterministic. This means that at the level of clinically measured outcomes, the likelihood that some outcome will occur as the result of some exposure is not a certainty. For example, if someone is a life-long smoker, they are more likely to experience some sort of lung or heart problem than a nonsmoker. Not all smokers experience lung or heart problems, and some nonsmokers indeed develop these conditions, but smoking certainly increases one’s chances of developing these problems. Consequently, assessing causality in probabilistic systems is challenging and requires an understanding of statistics and research methods. Moreover, this implies that the research must occur in populations (groups) of individuals (patients) as we are often attempting to detect only slight changes in the marginal likelihood of an outcome.
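The probabilistic nature of such associations can be made concrete with a short simulation. The sketch below uses invented risk figures (a 30% lifetime risk in the exposed group versus 10% in the unexposed group) purely for illustration; it shows that an exposure shifts the frequency of an outcome in a population without determining the outcome for any individual.

```python
import random

random.seed(42)

def simulate(n, risk):
    """Return how many of n individuals develop the condition."""
    return sum(random.random() < risk for _ in range(n))

# Hypothetical, invented risks: exposure raises the chance of the
# outcome but does not guarantee it for any single person.
n = 10_000
smoker_cases = simulate(n, 0.30)     # exposed group
nonsmoker_cases = simulate(n, 0.10)  # unexposed group

# Not every smoker is affected, and some nonsmokers are; the
# exposure only shifts the probability of the outcome.
risk_ratio = (smoker_cases / n) / (nonsmoker_cases / n)
print(f"smokers affected:    {smoker_cases}/{n}")
print(f"nonsmokers affected: {nonsmoker_cases}/{n}")
print(f"risk ratio: {risk_ratio:.2f}")  # close to the true ratio of 3
```

Because only a minority of each group is affected, detecting such a shift reliably requires observing groups of individuals rather than single patients, which is why clinical research occurs in populations.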

There is a rich philosophy underlying the notion of establishing causality that goes beyond the scope of this chapter. However, the philosophical discussion of causality can often be immobilizing when there is a pragmatic need to move forward with clinical decision making. Fortunately, there are well-regarded heuristic criteria that, when present, are considered to strongly suggest a causal association. Some of the most widely used are guidelines first put forward in 1965 (Hill 1965) by Sir Austin Bradford Hill (1897–1991), a British medical statistician, as a way of evaluating the existence of a causal link between specific factors. He wished to avoid the philosophical and semantic problems often encountered in discussions of causality and instead focus pragmatically on those aspects of an association that, if present, would most likely lead to an interpretation of causation (Hill 1965). His “viewpoints” (see Table 2.1) are put forward as suggestions and specifically were not called criteria for establishing causality. With the exception of the temporal association (i.e., the cause must precede the outcome), all are conditions that suggest, but are not required for, a causal association. It should be noted that Hill is not the only person to suggest such factors, but his are the most widely recognized.

Table 2.1 Hill’s viewpoints on the aspects of an association to be considered when deciding on causality.

Source: Hill (1965).

Strength of association: The stronger the association (larger effect size) between the hypothesized causal agent and the effect, the less likely the association has occurred by chance or is due to an extraneous variable (i.e., confounding).
Consistency: A relationship observed repeatedly in different people or under different circumstances is more likely to be causal.
Specificity: An effect is the result of only one cause. In Hill’s day this was considered more important than today.
Temporality: It is logically necessary for a cause to precede an effect in time.
Biological gradient: Also known as a dose-response relationship; as exposure to the causal agent increases, the likelihood of the effect occurring increases.
Plausibility: The suspected causation is biologically plausible. However, Hill acknowledged that what is biologically plausible depends upon the biological knowledge of the day.
Coherence: Data should not seriously conflict with the generally known facts of the natural history and biology of the disease.
Experiment: Experimental evidence provides the strongest support for a causal hypothesis.
Analogy: At times, commonly accepted phenomena in one area can inform us of similar relationships in another.
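Two of Hill’s viewpoints, strength of association and biological gradient, lend themselves to simple arithmetic. A minimal sketch, using invented counts of affected individuals at increasing exposure levels, might check for a dose-response pattern like this:

```python
# Hypothetical counts illustrating Hill's "biological gradient":
# (affected, total) at increasing exposure levels (made-up data).
exposure_groups = {
    "none":   (50, 1000),
    "low":    (80, 1000),
    "medium": (130, 1000),
    "high":   (210, 1000),
}

risks = {level: affected / total
         for level, (affected, total) in exposure_groups.items()}
for level, risk in risks.items():
    print(f"{level:>6}: risk = {risk:.2%}")

# A monotonically increasing risk across ordered exposure levels is
# consistent with a dose-response relationship.
ordered = list(risks.values())
print("monotonic increase:", all(a < b for a, b in zip(ordered, ordered[1:])))
```

A gradient like this supports, but does not by itself establish, a causal interpretation; the other viewpoints must still be weighed.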

Testing a Hypothesis

Testability is the hallmark of a well-structured hypothesis and the foundation for high-quality scientific investigation. Although the philosophy underlying the testing of hypotheses is beyond the scope of this text, the common approach is based on deduction and extends from the work of philosopher Karl Popper. This approach is known as refutation or falsifiability. Falsifiability means that a hypothesis can be shown to be false through observation or experimentation.

To make a hypothesis fully testable, it must go through a process of operationalization. This means that all of the elements of the hypothesis must be specified in a way that allows them to be measured. Moreover, it also implies the need for some a priori determination of what constitutes the standard by which the hypothesis will be declared “falsified.”

Once the hypothesis is fully operationalized, the investigator can then move forward with the empirical investigation, the aim of which is to attempt to falsify his or her hypothesis. If the investigator succeeds in demonstrating that the hypothesis is false, then that hypothesis should be discarded and, ideally, a new hypothesis, benefiting from this new information, created and the process repeated. Failing, through rigorous effort, to demonstrate that a hypothesis is false does not necessarily demonstrate that it is true, but it provides initial evidence that it may be true.

It is rarely the case that a single study is considered definitive proof of the veracity of a hypothesis. Rather, each experiment (or observational study) done to test a hypothesis provides evidence that supports or refutes the hypothesis. Over time, this so-called weight of evidence accumulated through multiple investigations, often by different investigators, provides a sense of the veracity of the hypothesis. Consequently, most knowledge created through the scientific process is considered provisional. Some say that hypotheses should not be defined as true or false but rather as useful or not useful in accurately predicting outcomes.
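In quantitative terms, an attempt at falsification often takes the form of a statistical test of a null hypothesis of no effect. The sketch below works through a two-proportion z-test on invented trial numbers (40 of 100 early-treatment patients versus 55 of 100 comparison patients later needing treatment); both the figures and the choice of test are assumptions for illustration, not data from any actual study.

```python
from math import sqrt, erf

# Invented trial result: later treatment needed in each group.
x1, n1 = 40, 100   # "early" treatment group
x2, n2 = 55, 100   # comparison group

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                        # two-proportion z statistic
# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.3f}")
# A small p-value counts against (helps "falsify") the null hypothesis
# of no treatment effect; it does not by itself prove the alternative.
```

Note that a single result like this is one piece of the weight of evidence; replication across studies, not one p-value, gives a sense of the veracity of the hypothesis.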

In the example above, Dr. Jones, as an orthodontist in full-time private practice, would not likely address her desire to know more about the association between early treatment and its effect on later treatment need through her own research efforts. Rather, she would likely search for publications where this issue has been studied. Her ability to understand the elements that go into creating high-quality clinical research, and which types of research designs are used to test various types of hypotheses, will give her the knowledge necessary to select and critically evaluate appropriate publications.

Research Quality Issues

Even the casual student of science appreciates that science demands carefully constructed and objective processes be used in generating information (data) to test (falsify) hypotheses. All well-designed clinical research shares common features that serve to reduce bias and ensure valid findings. These features are mentioned in brief here, and interested readers can find more detailed information in the recommended readings at the end of the chapter.

Measurement Issues

Accurate measurement is a hallmark of good science. Poorly selected or designed measures inevitably lead to the inability to properly test a hypothesis and ultimately to spurious results. Thus, great care is required when operationalizing a hypothesis to ensure that all of the important elements of the hypothesis can be measured in a valid and reliable manner. In the example, the notion of malocclusion needs to be defined—a case definition. This should include a detailed definition of what elements (e.g., overjet, overbite, ANB, etc.) will be included and exactly how they will be measured. Similarly, “early treatment” will need to be defined in terms of age, duration, forces, and appliances to be used.
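Reliability of a measure can itself be quantified. As a sketch, suppose two examiners independently classify the same ten study casts as Class I or Class II (the ratings below are invented); percent agreement and Cohen’s kappa, which corrects agreement for chance, could be computed as follows:

```python
from collections import Counter

# Invented ratings: two examiners classify the same 10 casts.
rater_a = ["II", "II", "I", "II", "I", "I", "II", "II", "I", "II"]
rater_b = ["II", "I", "I", "II", "I", "II", "II", "II", "I", "II"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa corrects the observed agreement for the agreement
# expected by chance alone, given each rater's category frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

A kappa well below the raw agreement is a reminder that two examiners can agree often simply by chance; reporting a chance-corrected statistic is one way a study demonstrates that its measures are reliable.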


