Various statistical approaches and visual tools have been developed to detect, estimate, and evaluate the impact of publication bias on meta-analysis results. In this article, we present the most popular statistical methods and graphical tools for addressing publication bias, illustrated with an example.
Comparing fixed-effect and random-effects results
The fixed-effect model weights each study by the inverse of its variance and therefore assigns greater weight to larger studies. By contrast, the random-effects model balances the weights more evenly between small and large studies. In the presence of substantial small-study effects, where an intervention appears more beneficial in smaller studies, the random-effects summary effect size will present the intervention as more beneficial than the fixed-effect summary effect size will.
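The contrast between the two weighting schemes can be sketched numerically. The code below uses hypothetical log risk ratios and standard errors (all values invented for illustration, not taken from the fluoride gel example) and the DerSimonian-Laird estimator of the between-study variance; with small-study effects built into the data, the random-effects weights are visibly more balanced and the summary effect more extreme.

```python
import numpy as np

# Hypothetical log risk ratios and standard errors for 5 trials;
# the most precise trial (SE = 0.08) shows the smallest effect,
# mimicking small-study effects.
yi = np.array([-0.9, -0.6, -0.5, -0.1, -0.8])
sei = np.array([0.30, 0.25, 0.20, 0.08, 0.35])

# Fixed-effect model: inverse-variance weights
w_fe = 1 / sei**2
mu_fe = np.sum(w_fe * yi) / np.sum(w_fe)

# DerSimonian-Laird estimate of the between-study variance tau^2
Q = np.sum(w_fe * (yi - mu_fe) ** 2)
C = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (len(yi) - 1)) / C)

# Random-effects model: tau^2 is added to each study's variance,
# which pulls the weights toward one another
w_re = 1 / (sei**2 + tau2)
mu_re = np.sum(w_re * yi) / np.sum(w_re)

# Share of the total weight held by the largest (most precise) trial
share_fe = w_fe.max() / w_fe.sum()
share_re = w_re.max() / w_re.sum()
print(f"FE summary {mu_fe:.3f}, largest-trial weight {share_fe:.0%}")
print(f"RE summary {mu_re:.3f}, largest-trial weight {share_re:.0%}")
```

With these invented numbers, the large trial dominates the fixed-effect analysis; under the random-effects model, its share of the weight drops sharply and the summary log risk ratio moves toward the smaller, more "beneficial" trials.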
Small-study effects are one of many factors responsible for heterogeneity in meta-analysis results. This means that if an intervention appears more beneficial under the random-effects model than under the fixed-effect model, researchers should investigate further whether this difference is attributable to small-study effects alone (ie, the intervention was more effective in smaller studies) or to other study characteristics.
The forest plot in Figure 1 displays the meta-analysis results on the effectiveness of fluoride gel against placebo for preventing dental caries in children and adolescents under the random-effects and fixed-effect models. The point estimates differ only slightly, and the confidence intervals overlap completely. Note, however, how much wider the confidence interval under the random-effects model is compared with that under the fixed-effect model. The similarity of the results indicates a possibly low impact of small-study effects. Forest plots provide only a visual exploration; further investigation (eg, a funnel plot and appropriate statistical methods) is required to detect any small-study effects and possible publication bias.
Funnel plot for small-study effects and publication bias
Another graphical tool for investigating the relationship between study size and effect size is the funnel plot. The funnel plot is a scatter plot in which the effect sizes are plotted on the x-axis and the standard errors of the effect sizes on the y-axis, with the y-axis reversed so that the most precise studies appear at the top. The spread of the points creates a funnel-like pattern. Points corresponding to studies with smaller sample sizes are scattered at the bottom of the funnel (because they yield effects with larger standard errors), whereas points corresponding to studies with larger sample sizes lie within a narrow range of values at the top of the funnel (because they yield effects with smaller standard errors). Instead of standard errors, we could have used the sample sizes of the studies or the variances of the effect sizes. However, only the standard errors spread out the points at the bottom of the funnel, where the smaller studies are found, and create the funnel-like pattern.
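A minimal funnel plot of this kind can be drawn with a few lines of matplotlib. The sketch below uses invented effect sizes and standard errors (not the fluoride gel data) and adds the usual reference lines: the summary effect, the line of no effect, and the pseudo 95% confidence limits that form the sides of the funnel.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical effect sizes (log risk ratios) and standard errors
yi = np.array([-0.9, -0.1, -0.5, -0.2, -0.8, -0.4, -0.3])
sei = np.array([0.45, 0.40, 0.30, 0.08, 0.50, 0.15, 0.12])

# Fixed-effect summary for the vertical reference line
w = 1 / sei**2
mu = np.sum(w * yi) / np.sum(w)

fig, ax = plt.subplots()
ax.scatter(yi, sei)
ax.axvline(mu, color="black")         # summary effect size
ax.axvline(0.0, color="red", ls=":")  # line of no effect

# Pseudo 95% confidence limits: mu +/- 1.96 * SE at every height,
# which traces out the diagonal sides of the funnel
se_grid = np.linspace(1e-3, sei.max() * 1.1, 100)
ax.plot(mu - 1.96 * se_grid, se_grid, color="grey")
ax.plot(mu + 1.96 * se_grid, se_grid, color="grey")

# Reverse the y-axis so the large, precise studies sit at the top
ax.invert_yaxis()
ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
fig.savefig("funnel.png")
```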
To determine whether there is publication bias or small-study effects, we need to examine how the points are distributed. A symmetrical distribution of the points about the summary effect size indicates the likely absence of small-study effects or publication bias, whereas an asymmetrical distribution may support their presence. The typical pattern in the presence of small-study effects is a prominent asymmetry at the bottom of the funnel that progressively disappears as we move up toward the larger studies.
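Visual symmetry judgments are subjective, so they are usually complemented by a regression test for funnel asymmetry. The sketch below implements Egger's test on hypothetical data (again, not the fluoride gel trials): the standardized effect is regressed on precision, and an intercept far from zero signals asymmetry.

```python
import numpy as np
from scipy import stats

# Hypothetical data: the smaller the study (larger SE),
# the more beneficial the effect -- a classic asymmetric funnel
yi = np.array([-0.9, -0.8, -0.5, -0.25, -0.2])
sei = np.array([0.50, 0.45, 0.30, 0.12, 0.08])

# Egger's regression: standard normal deviate vs precision
snd = yi / sei
precision = 1 / sei
res = stats.linregress(precision, snd)

# t test on the intercept (n - 2 degrees of freedom)
t_val = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_val), len(yi) - 2)
print(f"Egger intercept {res.intercept:.2f}, p = {p_val:.3f}")
```

A significant intercept supports funnel asymmetry but, like the plot itself, does not by itself distinguish publication bias from other causes of small-study effects.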
Figure 2 illustrates the funnel plot of our example. The effect sizes have been estimated using the fixed-effect model. The black line displays the summary effect size, and the red dotted line marks no effect. The diagonal lines represent the pseudo 95% confidence limits around the summary effect size for each standard error on the vertical axis. In the absence of heterogeneity, 95% of the studies should fall within the funnel defined by these diagonal lines. The asymmetry of the funnel plot is evident: toward the bottom of the plot there is only 1 small study, whereas the majority of the studies are scattered above the middle of the funnel plot. Four studies lie outside the pseudo 95% confidence limits. The scarcity of small studies at the bottom of the plot (only 1 small study) and of studies with small effects (on the right side of the plot) is a strong indication of possible publication bias in the meta-analysis results. In this case, we suspect that the summary effect may be biased. However, the absence of smaller studies might also indicate that the effectiveness of fluoride gel was investigated mainly in moderate and large studies. Therefore, publication bias cannot be perceived as the only cause of funnel asymmetry.