In this article, I will provide a short review of the fundamentals of a randomized controlled trial (RCT) before I move on to observational studies. It will be in a checklist format and can be used as a guide when an RCT is planned.
First, we need to define a clear research question. Ideally, a systematic review on the topic of interest should be the starting point, because it clarifies existing knowledge and controversies. An RCT is considered appropriate and ethical when there is “equipoise” regarding the question of interest. Equipoise means that there is genuine uncertainty in the scientific community regarding the effect of the intervention of interest. If we already know the effect of the intervention, it is considered unethical to plan a trial for which the answer is already known, since it would not be appropriate to expose a group of patients to a potentially inferior therapy or no therapy.
Once we decide to proceed with the trial, the key areas that need attention are the size of the trial, the use of a control, measures to reduce bias, and proper reporting (Table 1).
|Checklist for planning an RCT|
|---|
|Start with a systematic review.|
|Identify unanswered questions.|
|Decide what is a clinically important difference to be detected.|
|Calculate the required sample size.|
|Consider efficient designs (factorial and split-mouth, if applicable).|
|Is it feasible? Is it ethical?|
|Use an appropriate and concurrent control.|
|Use standard therapy, if available, rather than no therapy.|
|Randomization (random number generation, allocation concealment).|
|Blinding of participants, investigators, outcome assessors, data analysts (if feasible).|
|Equal treatment of groups.|
|Guard against losses to follow-up.|
|Use proper statistical methods.|
|Prespecify subgroup analyses, prefer interaction tests.|
|Use the CONSORT guidelines.|
|Report all primary and secondary outcomes and not only interesting results.|
|Report study limitations and applicability of results to other settings (generalizability).|
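The checklist item on randomization (random number generation, allocation concealment) can be illustrated with permuted-block randomization, a common way to generate an allocation sequence that stays balanced across arms over time. The sketch below is illustrative, not a prescribed method from this article; the function name and block size of 4 are assumptions. In practice, the generated sequence would be held by someone independent of recruitment (e.g., a central service or sealed envelopes) so that allocation remains concealed from those enrolling patients.

```python
import math
import random

def permuted_block_sequence(n, block_size=4, arms=("A", "B"), seed=None):
    """Generate a permuted-block allocation sequence for n participants.

    Each block contains an equal number of assignments to every arm,
    shuffled within the block, so group sizes never differ by more
    than block_size / 2 at any point during recruitment.
    """
    if block_size % len(arms) != 0:
        raise ValueError("block_size must be a multiple of the number of arms")
    rng = random.Random(seed)  # seeded for a reproducible sequence
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(math.ceil(n / block_size)):
        block = list(arms) * per_arm  # e.g., ["A", "B", "A", "B"]
        rng.shuffle(block)            # random order within the block
        sequence.extend(block)
    return sequence[:n]

# Example: a concealed list for 24 participants
allocation = permuted_block_sequence(24, seed=42)
```

Because every completed block is balanced, `allocation` contains exactly 12 "A" and 12 "B" assignments, while the within-block order remains unpredictable.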
The objective of a clinical trial is to provide reliable evidence regarding the presence or absence of an effect of a treatment modality. A sufficient number of participants allows detection of a difference with reasonable precision (good power) if a difference exists, or allows reasonable certainty that no difference exists if the results do not indicate an effect. Small studies tend to be less convincing and inconclusive because they often have low power. Recruiting more patients than necessary is a waste of resources and even unethical, since more patients than necessary might be exposed to a potentially ineffective therapy. There is a close relationship between power and sample size; usually, as the sample size increases, the study power is also expected to increase. Ideally, a balance between study power, a clinically important difference to be detected, trial feasibility, and credibility is required. Before we proceed with the sample size calculation, we need to define the parameters shown in Table 2. Power calculations should be considered at the design stage; they have limited or no value after the trial is conducted. After data analysis is complete, precision is assessed by looking at the confidence intervals of the estimates. Narrow confidence intervals indicate high power and precision, and vice versa.
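The interplay between power, the clinically important difference, and sample size described above can be made concrete with the standard normal-approximation formula for a two-arm trial with a continuous outcome, n per group = 2(z₁₋α/2 + z₁₋β)²σ²/δ². This is a minimal sketch; the function name and the numbers in the example (a difference of half a standard deviation) are illustrative, not taken from this article.

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a mean
    difference `delta` on an outcome with standard deviation `sd`,
    at two-sided significance level `alpha` and the given power.

    Uses n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sd / delta)^2.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g., 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g., 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
    return math.ceil(n)  # round up: fractional patients are not possible

# Detect a difference of 0.5 SD with 80% power at alpha = 0.05
n = sample_size_per_group(delta=0.5, sd=1.0)  # → 63 per group
```

Note how the sample size grows rapidly as the clinically important difference shrinks: halving `delta` to 0.25 SD roughly quadruples the requirement, which is why deciding on that difference in advance (a checklist item above) drives trial feasibility.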