Test the robustness of your meta-analysis pooled estimate with a leave-one-out forest plot. Remove each study one at a time, choose your confidence level (80/90/95/99%), customize font sizes, auto-generate a publication-ready methods paragraph, and export as a high-resolution PNG or SVG or copy the plot directly to your clipboard.
Load sample data to see how the tool works, or clear all fields to start fresh.
| Study | Effect | CI Lower | CI Upper |
|---|---|---|---|
Drag & drop a file or browse to upload. Supported formats: CSV, TSV, and Excel (.xlsx/.xls); maximum 500 rows.
Import studies from a spreadsheet. Expected columns: study (name), effect (effect size), cilower (CI lower bound), ciupper (CI upper bound).
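For reference, the expected layout maps onto a simple four-column table. A minimal Python sketch using pandas, with hypothetical placeholder rows purely to illustrate the documented column schema:

```python
# Hypothetical sample rows illustrating the documented columns
# (study, effect, cilower, ciupper); values are placeholders, not real data.
import io
import pandas as pd

csv_text = """study,effect,cilower,ciupper
Study A,0.45,0.20,0.70
Study B,0.32,0.05,0.59
Study C,0.51,0.28,0.74
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df)  # four columns, one row per study
```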
Add each study with its name, effect size, and confidence interval bounds. Use the example button to load sample data and see the expected format.
Select a fixed-effect (inverse-variance) or random-effects (DerSimonian-Laird) model. Pick a confidence level of 80%, 90%, 95%, or 99% to control interval width across iterations.
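For readers who want to see what these choices do under the hood, here is a minimal Python sketch of both pooling models. It is an illustrative re-implementation, not the tool's actual source code, and it assumes the per-study intervals you enter were computed at the 95% level (so each standard error can be recovered as CI width divided by 2 × 1.96):

```python
# Minimal sketch of inverse-variance fixed-effect and DerSimonian-Laird
# random-effects pooling. Assumes input CIs are 95% intervals.
import numpy as np
from scipy.stats import norm

def pool(effects, ci_lower, ci_upper, model="random", level=0.95):
    y = np.asarray(effects, dtype=float)
    # Back out standard errors from the 95% CI width (an assumption).
    se = (np.asarray(ci_upper, dtype=float)
          - np.asarray(ci_lower, dtype=float)) / (2 * norm.ppf(0.975))
    w = 1.0 / se**2                          # inverse-variance weights
    mu_fe = np.sum(w * y) / np.sum(w)        # fixed-effect estimate

    if model == "random":                    # DerSimonian-Laird tau^2
        q = np.sum(w * (y - mu_fe)**2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c) if c > 0 else 0.0
        w = 1.0 / (se**2 + tau2)             # re-weight with tau^2 added

    mu = np.sum(w * y) / np.sum(w)
    se_mu = np.sqrt(1.0 / np.sum(w))
    z = norm.ppf(0.5 + level / 2)            # 1.645 at 90%, 1.96 at 95%, ...
    return mu, mu - z * se_mu, mu + z * se_mu
```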
The tool re-runs the meta-analysis removing each study one at a time and renders a forest-style influence plot. Compare each row to the overall pooled estimate to identify influential studies. Use the font size slider to adjust text legibility.
Download the plot as a high-resolution PNG (3x) or SVG vector, or copy it to your clipboard in one click. Use the auto-generated methods paragraph directly in your manuscript.
Need this done professionally? Get a complete sensitivity and robustness analysis for your review.
Get a Free Quote

If the pooled estimate and its significance remain stable when any single study is removed, the meta-analysis conclusion is robust. The forest-style influence plot makes this visual comparison immediate. Document this stability explicitly in your results section, as reviewers and editors look for it.
A study that substantially shifts the pooled estimate when removed may simply be the largest or most precise study. Influence is not the same as bias. Investigate why a study is influential (sample size, population, methodology) rather than automatically excluding it.
Switching between 80%, 90%, 95%, and 99% confidence levels reveals whether borderline results depend on the chosen coverage. If the interval crosses the null at 95% but not at 90%, reviewers will want to see that reported. Use the confidence level selector to check all four in seconds.
Use the auto-generated methods paragraph to describe your sensitivity analysis procedure in PRISMA-compliant language. Export the influence plot as a high-resolution 3x PNG for journal submission, SVG for vector-quality figures, or copy it to your clipboard for quick insertion into documents and slides. Adjust font sizes with the slider before exporting to ensure labels are legible at the target dimensions.
Sensitivity analysis is a core component of rigorous meta-analysis, required by both PRISMA 2020 reporting guidelines (Page et al., 2021) and the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., 2023, Chapter 10). A sensitivity analysis in meta-analysis examines whether analytical decisions, such as which studies to include, which statistical model to use, or how to handle missing data, materially change the pooled result. When the conclusion remains stable across all tested scenarios, the evidence is considered robust. This tool now includes a confidence level selector (80%, 90%, 95%, 99%) so you can test robustness at multiple coverage thresholds without re-entering data.
The leave-one-out analysis tool implements the most common form of sensitivity analysis: sequential omission. Each study is removed one at a time, and the meta-analysis is re-computed with the remaining k − 1 studies. The results are displayed as a leave-one-out forest plot, a forest-style visualization where each row represents one iteration and includes the re-computed point estimate and confidence interval. A dashed reference line marks the overall pooled effect so you can instantly spot rows that deviate. A study is considered influential when its removal causes the pooled effect to cross the null line (changing statistical significance), shifts the point estimate by more than 10 to 20 percent, or substantially alters the I-squared heterogeneity statistic. The Baujat plot provides a complementary influence diagnostic by plotting each study's contribution to the overall Q heterogeneity statistic against its influence on the pooled result, and the Galbraith (radial) plot offers an alternative visualization where outlier studies appear as points falling outside the confidence band around the regression line.
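The procedure itself is simple enough to sketch in a few lines. Reusing the hypothetical pool() function from the sketch above, a leave-one-out pass might look like this (the null value of 0 assumes effects on a mean-difference or log scale, and the 10% shift tolerance is one illustrative choice from the 10 to 20 percent range mentioned above):

```python
# Leave-one-out sketch reusing pool() from the earlier snippet: drop each
# study, re-pool the remaining k - 1, and flag iterations that cross the
# null or shift the estimate by more than a chosen relative tolerance.
import numpy as np

def leave_one_out(names, effects, ci_lo, ci_hi, level=0.95, shift_tol=0.10):
    y, lo, hi = (np.asarray(a, dtype=float) for a in (effects, ci_lo, ci_hi))
    overall, _, _ = pool(y, lo, hi, level=level)
    rows = []
    for i, name in enumerate(names):
        keep = np.arange(len(y)) != i            # mask out study i
        mu, lo_i, hi_i = pool(y[keep], lo[keep], hi[keep], level=level)
        rows.append({
            "omitted": name,
            "estimate": mu,
            "ci": (lo_i, hi_i),
            "crosses_null": lo_i <= 0.0 <= hi_i,  # null = 0 on this scale
            "influential": abs(mu - overall) > shift_tol * abs(overall),
        })
    return overall, rows
```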
This meta-analysis robustness check complements other sensitivity approaches described in the Cochrane Handbook. GOSH (Graphical Overview of Study Heterogeneity) analysis extends the leave-one-out logic by computing the pooled estimate for every possible subset of studies, producing a scatter plot that reveals distinct clusters corresponding to different underlying subpopulations. Cook's distance, adapted from regression diagnostics to the meta-analytic context, quantifies each study's overall influence on the pooled estimate and its variance in a single summary measure. Fixed-effect versus random-effects model comparison tests whether the variance structure assumption changes the conclusion. Excluding studies at high risk of bias (assessed using RoB 2 for randomized trials or ROBINS-I for non-randomized studies) determines whether methodological quality drives the result. Restricting to studies with directly measured outcomes versus proxy outcomes tests indirectness. Pre-specifying these sensitivity analyses in your PROSPERO protocol before data extraction begins strengthens credibility by preventing post-hoc analytic decisions.
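To make the GOSH idea concrete, the sketch below (again reusing the hypothetical pool() function from above) re-pools every subset of two or more studies. Note the combinatorial cost grows as 2^k, which is why real GOSH implementations subsample subsets for large k:

```python
# GOSH-style sketch: re-pool every subset of >= 2 studies (2^k - k - 1
# subsets). A full GOSH plot charts each subset's estimate against its
# I-squared; this sketch returns the estimates only.
from itertools import combinations
import numpy as np

def gosh_estimates(effects, ci_lo, ci_hi):
    y, lo, hi = (np.asarray(a, dtype=float) for a in (effects, ci_lo, ci_hi))
    points = []
    for r in range(2, len(y) + 1):
        for subset in combinations(range(len(y)), r):
            s = list(subset)
            mu, _, _ = pool(y[s], lo[s], hi[s])
            points.append((subset, mu))
    return points
```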
The tool generates a publication-ready methods paragraph that describes the sensitivity analysis procedure, statistical model, confidence level, and key findings in language aligned with PRISMA 2020 reporting standards. Copy this paragraph directly into your manuscript to save time during the writing phase. For visual output, the font size slider lets you scale all plot text before exporting, ensuring labels remain legible whether your figure appears in a journal column, a poster, or a conference slide. Export options include high-resolution PNG at 3x pixel density for print-quality figures, SVG vector export for infinitely scalable graphics, and a one-click copy to clipboard that captures the plot as a high-resolution PNG for instant pasting into documents or collaborative editing tools.
The sensitivity analysis workflow connects directly to other meta-analytic assessments. Visualize the full set of study results using our forest plot generator to identify potential outliers before running leave-one-out analysis. Assess whether missing studies may bias the pooled estimate using our funnel plot and publication bias tool with Egger's regression test. If high heterogeneity persists across leave-one-out iterations, explore moderator variables using our meta-regression data formatter. Rate the overall certainty of your evidence, incorporating sensitivity analysis findings, with the GRADE certainty of evidence assessment tool.
Leave-one-out sensitivity analysis is a method for assessing the influence of individual studies on the pooled meta-analysis result. Each study is removed one at a time, and the meta-analysis is re-run with the remaining studies. If the pooled estimate changes substantially when a particular study is removed, that study is considered influential. This tool displays the results as a forest-style influence plot so you can visually compare each iteration against the overall pooled estimate. The technique helps identify studies that disproportionately drive the overall result and informs decisions about the robustness of the meta-analytic conclusion.
Sensitivity analysis should be performed routinely in every meta-analysis, as recommended by Cochrane and PRISMA guidelines. It is especially important when: (1) studies differ in quality or risk of bias, (2) there is substantial heterogeneity (high I-squared), (3) one or more studies are outliers, (4) studies differ in design (e.g., RCTs mixed with observational studies), or (5) there are concerns about the influence of a single large study. Pre-specifying sensitivity analyses in your protocol strengthens the credibility of the review.
The forest-style influence plot displays each 'study-removed' pooled estimate as a row. Compare each row to the overall pooled estimate (shown as a dashed reference line or diamond). If removing a single study causes the pooled effect to cross the null line, or if the point estimate shifts by more than 10 to 20 percent, that study is influential. You can adjust the confidence level (80%, 90%, 95%, or 99%) to see how interval width changes across iterations. A robust pooled estimate should remain relatively stable regardless of which study is removed.
The tool supports four confidence levels: 80%, 90%, 95%, and 99%. The default is 95%, which is the standard in most systematic reviews and meta-analyses. Selecting a narrower interval (80% or 90%) can highlight subtle shifts in the pooled estimate that are masked at wider coverage, while a 99% interval is useful when you need to demonstrate extreme robustness. Switch between levels instantly to see how your leave-one-out results respond.
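The width differences follow directly from the normal quantiles behind each level, as this quick sketch of the four multipliers shows:

```python
# The critical z-values behind each coverage option; interval half-width is
# z * SE, so an 80% interval is roughly half the width of a 99% one.
from scipy.stats import norm

for level in (0.80, 0.90, 0.95, 0.99):
    print(f"{level:.0%} CI: z = {norm.ppf(0.5 + level / 2):.4f}")
# 80% CI: z = 1.2816
# 90% CI: z = 1.6449
# 95% CI: z = 1.9600
# 99% CI: z = 2.5758
```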
Sensitivity analysis tests the robustness of the overall result by varying analytical decisions (e.g., removing studies, changing the model, excluding high-risk-of-bias studies). Subgroup analysis tests whether the effect differs across predefined groups (e.g., by age, intervention dose, or study design). Sensitivity analyses answer 'Is the result robust?' while subgroup analyses answer 'Does the effect vary by group?' Both are important but serve different purposes.
Leave-one-out sensitivity analysis is not designed to detect publication bias directly. However, it can reveal if the pooled estimate is driven primarily by one or two small studies with extreme effects, a pattern that could overlap with publication bias. For formal publication bias assessment, use funnel plots with Egger's test or trim-and-fill analysis. Sensitivity analysis complements, but does not replace, dedicated publication bias methods.
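For context, Egger's test is itself only a few lines: it regresses each study's standardized effect on its precision and checks whether the intercept departs from zero. A minimal illustrative sketch (not part of this tool):

```python
# Egger's regression test sketch: regress effect/SE on 1/SE; an intercept
# far from zero suggests small-study (funnel plot) asymmetry.
import numpy as np
from scipy import stats

def eggers_test(effects, se):
    y = np.asarray(effects, dtype=float) / np.asarray(se, dtype=float)
    x = 1.0 / np.asarray(se, dtype=float)
    fit = stats.linregress(x, y)
    t = fit.intercept / fit.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(y) - 2)  # two-sided test on intercept
    return fit.intercept, p
```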
The tool offers three export options. You can download a high-resolution PNG at 3x pixel density, which produces publication-quality images suitable for journal submission. You can also export the plot as an SVG vector file, which scales to any size without losing quality and is ideal for journals that request vector graphics. Additionally, you can copy the plot directly to your clipboard as a high-resolution PNG for quick pasting into documents, slides, or collaborative editing tools. The font size slider lets you adjust all plot text before exporting so labels remain legible at your target print or slide dimensions.
After running the leave-one-out analysis, the tool generates a publication-ready methods paragraph that describes the sensitivity analysis procedure, the statistical model used, the confidence level selected, and a summary of the findings. You can copy this text directly into the methods or results section of your manuscript. The paragraph follows standard reporting conventions aligned with PRISMA 2020 guidelines and saves time during the writing phase of your systematic review.
Visualize your full meta-analysis results with our forest plot generator with weighted squares and diamond summary. Assess publication bias using our funnel plot and publication bias tool with Egger's test and trim-and-fill. Calculate individual study effect sizes with our effect size calculator for SMD, OR, and RR. Assess the certainty of your evidence with our GRADE certainty of evidence assessment tool. Extend your robustness analysis with our GOSH plot generator to visualize all possible study subsets and identify heterogeneity clusters. To determine whether your cumulative evidence has reached a conclusive information size, run a trial sequential analysis.
Reviewed by
Dr. Sarah Mitchell holds a PhD in Biostatistics from Johns Hopkins Bloomberg School of Public Health and has over 15 years of experience in systematic review methodology and meta-analysis. She has authored or co-authored 40+ peer-reviewed publications in journals including the Journal of Clinical Epidemiology, BMC Medical Research Methodology, and Research Synthesis Methods. A former Cochrane Review Group statistician and current editorial board member of Systematic Reviews, Dr. Mitchell has supervised 200+ evidence synthesis projects across clinical medicine, public health, and social sciences. She reviews all Research Gold tools to ensure statistical accuracy and compliance with Cochrane Handbook and PRISMA 2020 standards.
Our PhD team runs complete meta-analyses: data extraction, effect size computation, forest plots, sensitivity analysis, and a manuscript ready for journal submission. Average turnaround: 2-4 weeks.