Implementation of a fixed-effect model implies that 2 conditions are met. First, we are confident that all studies included in the meta-analysis are similar (clinical homogeneity in terms of patient characteristics, implementation of the interventions, and design and conduct of the studies) and that it is sensible to synthesize the information. Second, we are interested in estimating a common effect size that generalizes strictly to the population included in the meta-analysis.
However, in practice, those conditions rarely hold, so a fixed-effect model can seldom be applied. In real-world syntheses, the effect size varies from study to study because studies differ in the mix of participants and the implementation of interventions, among other reasons. Therefore, in most meta-analyses, a random-effects model seems more appropriate: it accounts for the inherent diversity of the studies, allows the results to be generalized to a wider population, and yields results identical to those of the fixed-effect model when heterogeneity is trivial.
The weighting scheme of the models
Comparing the weighting schemes of these 2 models, we see that when we move from the fixed-effect model to the random-effects model, larger studies tend to lose influence and smaller studies tend to gain influence. This means that if larger effect sizes correspond to larger trials, the summary effect size will be larger under the fixed-effect model (which draws most of its information from these trials) than under the random-effects model. Conversely, if larger effect sizes correspond to smaller trials, the summary effect size will be smaller under the fixed-effect model (which largely discounts these trials) than under the random-effects model. If heterogeneity is not zero, the estimate of the summary mean will usually differ between the models because of the different weighting schemes applied.
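This redistribution of weight can be made concrete with a small numeric sketch. The effect sizes, within-study variances, and between-study variance (tau²) below are hypothetical illustration values, not data from any real meta-analysis; the fixed-effect weights are 1/vᵢ and the random-effects weights are 1/(vᵢ + tau²).

```python
# Hypothetical effect sizes (e.g., log odds ratios) and within-study
# variances; tau2 is an assumed between-study variance, not estimated here.
effects = [0.40, 0.35, 0.10]      # 2 small studies, 1 large study
variances = [0.20, 0.25, 0.01]    # the last study is much larger

tau2 = 0.05                       # assumed between-study variance

# Fixed-effect weights: w_i = 1 / v_i
w_fe = [1 / v for v in variances]
# Random-effects weights: w*_i = 1 / (v_i + tau^2)
w_re = [1 / (v + tau2) for v in variances]

def relative(w):
    """Each study's share of the total weight."""
    total = sum(w)
    return [wi / total for wi in w]

rel_fe = relative(w_fe)
rel_re = relative(w_re)

# The large study dominates under the fixed-effect model but loses
# relative weight once tau^2 is added to every study's variance.
print("FE relative weights:", [round(x, 3) for x in rel_fe])
print("RE relative weights:", [round(x, 3) for x in rel_re])

def pooled(w, y):
    """Inverse-variance weighted mean."""
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Here the larger effects sit in the smaller trials, so the fixed-effect
# estimate is smaller than the random-effects estimate, as described above.
print("FE pooled estimate:", round(pooled(w_fe, effects), 3))
print("RE pooled estimate:", round(pooled(w_re, effects), 3))
```

Running this, the large study's share of the weight drops from roughly 92% under the fixed-effect model to roughly 69% under the random-effects model, and the pooled estimate moves toward the smaller trials accordingly.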
The selection of the weighting scheme should be guided by the goals of the analysis. When the goal is to estimate 1 true effect size, we assume that only the within-study variance affects the effect size; hence, the weighting scheme of the fixed-effect model seems appropriate. By contrast, when we estimate the mean of the effect sizes in a range of studies, the summary effect size must account for the information from all studies and not discount the smaller ones by giving them very small weights. In this case, the weighting scheme of a random-effects model should be used.
Presentation of the results
When performing a meta-analysis, we focus mainly on reporting the summary effect size and its variance. Under trivial heterogeneity, the fixed-effect model might be implemented, and we focus on presenting the common effect size and its uncertainty. However, under substantial heterogeneity, the random-effects model is considered appropriate, and we should focus mainly on the uncertainty around the mean. For instance, if an intervention reduces dental caries in some studies but increases it in others, then we should shift our focus away from the mean effect and toward its uncertainty and try to identify the reasons that might explain the dispersion of the effect sizes across the studies.
The selection of the model is usually associated with the amount of heterogeneity in the meta-analysis. Trivial heterogeneity indicates that the fixed-effect model can be safely applied. In contrast, nontrivial heterogeneity calls for the random-effects model. For example, if an intervention is more effective in adolescents than in adults, the heterogeneity might be substantial. In general, differences in the characteristics, design, and conduct of the studies might explain the heterogeneity.
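The amount of heterogeneity is typically quantified with Cochran's Q statistic, the I² index, and an estimate of the between-study variance tau². The sketch below uses the DerSimonian-Laird moment estimator on hypothetical effect sizes and variances (one common estimator among several; illustration values only).

```python
# Hypothetical effect sizes and within-study variances for 4 studies.
effects = [0.10, 0.30, 0.55, 0.80]
variances = [0.04, 0.05, 0.06, 0.04]

# Fixed-effect (inverse-variance) weights and pooled mean.
w = [1 / v for v in variances]
y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q: weighted squared deviations from the fixed-effect mean.
Q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1

# I^2: proportion of total variability attributable to heterogeneity
# rather than chance (truncated at zero).
i2 = max(0.0, (Q - df) / Q)

# DerSimonian-Laird moment estimator of tau^2 (truncated at zero).
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)

print(f"Q = {Q:.2f} on {df} df, I^2 = {100 * i2:.0f}%, tau^2 = {tau2:.3f}")
```

With these illustrative data, Q exceeds its degrees of freedom and I² is above 50%, so a random-effects model using vᵢ + tau² in the weights would be the natural choice.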
However, sometimes when the random-effects model seems appropriate, the number of studies is too small to estimate the heterogeneity precisely. In this case, we cannot draw reliable conclusions about the summary effect size and its uncertainty; thus, we should choose between the following options to present the results:
1. Apply a random-effects meta-analysis and report the within-study effect sizes instead of the summary effect.
2. Perform a random-effects meta-analysis in a Bayesian framework, where the heterogeneity is estimated also by using information from outside the current set of studies (known as prior information).
The problem with the first option is that readers might draw conclusions using inappropriate approaches, such as counting the numbers of significant and nonsignificant effect sizes. The second option is considered ideal, especially for a meta-analysis with few trials, but only a few reviewers are familiar with Bayesian meta-analysis.
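To give a flavor of the Bayesian option, the sketch below approximates the joint posterior of the mean effect (mu) and the heterogeneity (tau) by brute-force grid evaluation, using a flat prior on mu and an assumed half-normal prior on tau that supplies the outside information the few studies cannot. All numbers are hypothetical, and a real analysis would use MCMC software (e.g., Stan or PyMC) rather than a grid.

```python
import math

# Only 3 hypothetical studies: too few to estimate tau precisely
# from the data alone, which is where the prior helps.
effects = [0.15, 0.60, 0.35]
variances = [0.05, 0.08, 0.06]
prior_sd_tau = 0.5   # scale of the half-normal prior on tau (assumed)

def log_lik(mu, tau):
    # Marginal model for each study: y_i ~ Normal(mu, v_i + tau^2).
    total = 0.0
    for y, v in zip(effects, variances):
        s2 = v + tau ** 2
        total += -0.5 * math.log(2 * math.pi * s2) - (y - mu) ** 2 / (2 * s2)
    return total

# Grid over (mu, tau): flat prior on mu, half-normal prior on tau.
mus = [i / 100 for i in range(-100, 201)]    # mu in [-1.00, 2.00]
taus = [i / 100 for i in range(0, 151)]      # tau in [0.00, 1.50]
post = {}
for tau in taus:
    # Half-normal log density, up to a constant.
    lp_tau = -tau ** 2 / (2 * prior_sd_tau ** 2)
    for mu in mus:
        post[(mu, tau)] = math.exp(log_lik(mu, tau) + lp_tau)

# Normalize and compute posterior means.
z = sum(post.values())
mu_mean = sum(mu * p for (mu, _), p in post.items()) / z
tau_mean = sum(tau * p for (_, tau), p in post.items()) / z
print(f"posterior mean of mu  ~ {mu_mean:.2f}")
print(f"posterior mean of tau ~ {tau_mean:.2f}")
```

The key point is that tau never has to be pinned down by the 3 studies alone: the posterior blends the data with the prior, and reporting the full uncertainty of mu (rather than a single point estimate) is exactly what the second option above recommends.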