Research Methods 101: Meta-Analyses

This post is part of NPC's series on research methods.

Not infrequently, results for the same intervention differ across clinical trials, and it may not be clear whether one therapy provides more benefit than another. As comparative effectiveness research (CER) grows and more studies are conducted, clinicians and policymakers are increasingly likely to encounter this scenario. In a systematic review, a researcher identifies similar studies and displays their results in a table, enabling qualitative comparisons across the studies. In a meta-analysis, the data from the included studies are statistically combined into a single "result." Merging the data from a number of studies increases the effective sample size of the investigation, supporting a statistically stronger conclusion about the body of research. In doing so, investigators may detect low-frequency events and demonstrate more subtle distinctions between therapeutic alternatives.

When studies have been properly identified and combined, the meta-analysis produces a summary estimate of the findings and a confidence interval that can serve as a benchmark for medical opinion and practice. However, when done incorrectly, the quantitative and statistical analysis can produce impressive-looking "numbers" but biased results. The following are important criteria for a properly conducted meta-analysis:

  1. Carefully defining unbiased inclusion or exclusion criteria for study selection
  2. Including only those studies that have similar design elements, such as patient population, drug regimen, outcomes being assessed, and timeframe
  3. Applying correct statistical methods to combine and analyze the data. Reporting this information is essential for the reader to determine whether the data were suitable to combine and whether the meta-analysis draws unbiased conclusions.
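As a minimal sketch of the statistical combination described above, the example below uses inverse-variance (fixed-effect) pooling, one common way to merge study results into a single summary estimate with a confidence interval. The study names, effect sizes, and standard errors are entirely hypothetical, chosen only to illustrate how pooling tightens the estimate relative to any single trial.

```python
import math

# Hypothetical per-study results: treatment effect estimates (e.g., log odds
# ratios) and their standard errors. These numbers are illustrative only.
studies = [
    {"name": "Trial A", "effect": -0.30, "se": 0.15},
    {"name": "Trial B", "effect": -0.10, "se": 0.20},
    {"name": "Trial C", "effect": -0.25, "se": 0.10},
]

def fixed_effect_pool(studies):
    """Inverse-variance (fixed-effect) pooling: weight each study by
    1 / SE^2, then compute the weighted mean effect and its 95% CI."""
    weights = [1.0 / s["se"] ** 2 for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

pooled, pooled_se, (lo, hi) = fixed_effect_pool(studies)
print(f"Pooled effect: {pooled:.3f}  95% CI: ({lo:.3f}, {hi:.3f})")
```

Note that the pooled standard error is smaller than that of any individual trial, which is the "increased effective sample size" benefit described above. A full analysis would also check between-study heterogeneity before choosing a fixed-effect over a random-effects model.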

Meta-analyses of randomized clinical trials are considered to be the highest level of medical evidence as they are based upon a synthesis of rigorously controlled trials that systematically reduce bias and confounding. This technique is useful in summarizing available evidence and will likely become more common in the era of publicly funded comparative effectiveness research.

Article adapted from NPC’s Demystifying Comparative Effectiveness Research: A Case Study Learning Guide.