Every meta-analysis rests on an implicit assumption: that no single study is so dominant that removing it would overturn the conclusions. Leave-one-out sensitivity analysis tests that assumption directly. By systematically omitting one study at a time and re-estimating the pooled effect, you produce a clear picture of how much each study drives your result.

When the pooled effect stays stable across all iterations, your conclusion is robust. When one omission shifts the estimate dramatically, you have an influential study that demands explanation.

What Makes a Study Influential

Three mechanisms create influential studies:

Precision leverage occurs when a study has an exceptionally small variance, giving it high statistical weight.

Effect size discordance occurs when a study's point estimate is far from the other studies in the pool.

Heterogeneity contribution is more subtle: a study can inflate tau-squared, changing how all weights are distributed.
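The interplay of these three mechanisms is easy to see numerically. The sketch below uses entirely hypothetical data (five made-up studies, with "S3" given an unusually small standard error and "S4" a discordant effect) to show how inverse-variance weights and the DerSimonian-Laird tau-squared respond; it is an illustration under a fixed-effect weighting scheme, not any tool's output.

```python
import numpy as np

# Hypothetical effect sizes (e.g., log risk ratios) and standard errors.
# S3 is unusually precise; S4 is discordant with the rest of the pool.
yi  = np.array([0.30, 0.25, 0.28, 0.95, 0.22])
sei = np.array([0.20, 0.25, 0.05, 0.22, 0.30])

# Precision leverage: fixed-effect weights are inverse variances,
# so the low-variance study S3 dominates the pool.
wi = 1.0 / sei**2
print(np.round(wi / wi.sum(), 3))

# Heterogeneity contribution: the discordant study S4 inflates Q,
# and with it the DerSimonian-Laird estimate of tau-squared.
theta_fe = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - theta_fe) ** 2)
C = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
tau2 = max(0.0, (Q - (len(yi) - 1)) / C)
print(round(tau2, 4))
```

With these numbers, S3 alone carries well over half the total weight, and S4's distance from the pooled mean is what pushes Q above its degrees of freedom.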

Try our free Sensitivity Analysis Tool to identify all three types of influential studies instantly.

How Leave-One-Out Analysis Works

For a meta-analysis with k studies: remove study 1, re-fit the model on k-1 studies, record the pooled effect, confidence interval, p-value, and heterogeneity statistics. Restore study 1, remove study 2. Continue until every study has been omitted once.

The output is a table or forest-style plot showing k rows of pooled estimates, each representing the analysis with one study excluded.
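The loop described above takes only a few lines to implement. This sketch re-pools hypothetical data under a fixed-effect inverse-variance model, chosen purely for brevity; a random-effects leave-one-out works the same way, except that tau-squared is also re-estimated in each iteration.

```python
import numpy as np

def pool_fixed(yi, sei):
    """Inverse-variance fixed-effect pooled estimate with a 95% CI."""
    wi = 1.0 / np.asarray(sei) ** 2
    theta = np.sum(wi * yi) / np.sum(wi)
    se = np.sqrt(1.0 / np.sum(wi))
    return theta, theta - 1.96 * se, theta + 1.96 * se

def leave_one_out(labels, yi, sei):
    """Re-pool k times, omitting one study per iteration."""
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    rows = []
    for i in range(len(yi)):
        keep = np.arange(len(yi)) != i          # mask out study i
        theta, lo, hi = pool_fixed(yi[keep], sei[keep])
        rows.append((labels[i], theta, lo, hi))
    return rows

# Hypothetical data: labels, effect sizes, standard errors.
labels = ["S1", "S2", "S3", "S4", "S5"]
yi  = [0.30, 0.25, 0.28, 0.95, 0.22]
sei = [0.20, 0.25, 0.05, 0.22, 0.30]

rows = leave_one_out(labels, yi, sei)
for label, theta, lo, hi in rows:
    print(f"omit {label}: {theta:.3f} [{lo:.3f}, {hi:.3f}]")
```

Each printed row corresponds to one row of the leave-one-out table: the pooled effect and interval with that study excluded.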

Running the Analysis with Our Free Tool

Navigate to the Sensitivity Analysis Tool and enter your study labels, effect sizes, and standard errors. The tool runs the full leave-one-out procedure automatically and displays a leave-one-out forest plot with color coding for studies where exclusion changes significance.

The one-click export provides the equivalent R code using the leave1out() function from the metafor package.

Interpreting Results

When Results Are Robust

If all k pooled estimates cluster tightly around the original estimate and the confidence intervals consistently include or exclude the null value across all iterations, your conclusion is robust. Report this explicitly.

When One Study Changes Significance

Investigate the influential study along four dimensions: sample size and population, methodological quality, effect size and direction, and publication context.

After investigation, present both the full-model and leave-one-out estimates as co-primary results, or conduct a subgroup analysis separating the outlying study.

Use the Forest Plot Generator to visualize the study's position relative to the rest of the pool.

When Multiple Studies Are Influential

If removing several different studies each changes significance, the meta-analysis has a fragility problem. Report it honestly and recommend further primary research.

Significance Change Detection

Flag an omission as consequential on either of two criteria: a clinically meaningful shift in the point estimate (a change exceeding the minimally important difference), or a change in statistical significance (the confidence interval moves from excluding the null to including it, or vice versa).
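Both criteria are mechanical to check. This minimal sketch assumes a hypothetical minimally important difference of 0.15 on the analysis scale and a null value of 0; both thresholds are placeholders you would replace with values justified in your protocol.

```python
def significance_changes(full, loo_rows, mid=0.15, null=0.0):
    """Flag leave-one-out iterations that shift the estimate by more than
    `mid` or flip statistical significance relative to the full model.

    full:     (estimate, ci_low, ci_high) for the all-studies model
    loo_rows: list of (omitted_label, estimate, ci_low, ci_high)
    """
    f_est, f_lo, f_hi = full
    f_sig = not (f_lo <= null <= f_hi)        # full model excludes the null?
    flags = []
    for label, est, lo, hi in loo_rows:
        shifted = abs(est - f_est) > mid      # clinically meaningful shift
        sig = not (lo <= null <= hi)
        flipped = sig != f_sig                # significance status changed
        if shifted or flipped:
            flags.append((label, shifted, flipped))
    return flags

# Hypothetical full-model result and leave-one-out rows.
full = (0.31, 0.22, 0.40)
loo_rows = [
    ("S1", 0.30, 0.20, 0.40),
    ("S2", 0.33, 0.24, 0.42),
    ("S3", 0.12, -0.02, 0.26),   # omitting S3 loses significance
]
flags = significance_changes(full, loo_rows)
print(flags)
```

Here only the omission of "S3" is flagged: its estimate shifts by more than the assumed minimally important difference, and its interval now includes the null.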

The Funnel Plot Generator helps identify outliers before running leave-one-out analysis.

Reporting in Methods and Results

In methods: state that leave-one-out sensitivity analysis was planned a priori. In results: include a summary table of leave-one-out estimates and identify influential studies. In discussion: address what the results imply for evidence certainty.

Key Takeaways

- Leave-one-out analysis re-estimates the pooled effect k times, omitting each study once, to show how much any single study drives the result.
- Influential studies arise through precision leverage, effect size discordance, or heterogeneity contribution.
- A robust result stays stable across all iterations; report that stability explicitly.
- Never exclude an influential study automatically. Investigate it, then present both the full-model and leave-one-out estimates.
- Use the same model (fixed-effect or random-effects) as your primary analysis, and report the procedure per PRISMA 2020 item 16.

FAQ

How many studies do I need before leave-one-out analysis is meaningful?

Leave-one-out analysis can be run with as few as three studies, but it becomes more informative with eight or more. Below five, each omission removes a large share of the evidence, so report individual study estimates alongside the pooled result instead.

Should I use a fixed-effect or random-effects model for leave-one-out analysis?

Use the same model as your primary analysis. Switching models confounds the influence of the excluded study with the effect of changing the model.

What is the difference between leave-one-out analysis and outlier detection?

Leave-one-out focuses on the pooled estimate: do conclusions change when studies are excluded? Outlier detection focuses on whether individual effects are statistically extreme. They answer related but distinct questions.
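The two perspectives can be computed side by side. This sketch derives leave-one-out standardized residuals under a fixed-effect model: each study is compared against the pool of the remaining studies, a simplified analogue of the externally studentized residuals that packages such as metafor report. The data are hypothetical.

```python
import numpy as np

def deleted_residuals(yi, sei):
    """Leave-one-out standardized residuals (fixed-effect model):
    each study's effect minus the pooled effect of the other studies,
    scaled by the combined standard error of that difference."""
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    wi = 1.0 / sei**2
    z = np.empty(len(yi))
    for i in range(len(yi)):
        keep = np.arange(len(yi)) != i
        theta = np.sum(wi[keep] * yi[keep]) / np.sum(wi[keep])
        var = 1.0 / np.sum(wi[keep])          # variance of the omitted-i pool
        z[i] = (yi[i] - theta) / np.sqrt(sei[i] ** 2 + var)
    return z

# Hypothetical data: S4 is discordant with the rest of the pool.
yi  = [0.30, 0.25, 0.28, 0.95, 0.22]
sei = [0.20, 0.25, 0.05, 0.22, 0.30]
z = deleted_residuals(yi, sei)
print(np.round(z, 2))
```

In this example only the discordant study exceeds the conventional |z| > 1.96 cutoff, yet a study can be influential (via precision leverage) without ever being a statistical outlier, which is why the two checks complement rather than replace each other.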

If one study is influential, should I exclude it?

Not automatically. Excluding solely because of influence introduces selection bias. Investigate why it is influential, assess quality, then present both results.

How do I report leave-one-out results in a PRISMA-compliant review?

PRISMA 2020 item 16 addresses sensitivity analyses. Describe the procedure, software, and criteria for meaningful change in your methods. Report the range of estimates and identify influential studies in results.

Need help with your systematic review or meta-analysis? Get a free quote from our team of PhD researchers.