Assess the robustness of your meta-analysis to publication bias using Vevea-Hedges weight-function selection models (Vevea and Hedges, 1995). The tool compares the unadjusted random-effects estimate against selection-adjusted estimates across moderate and severe one-tailed and two-tailed weight patterns. Export R code for weightr, D3.js visualizations, and an auto-generated methods paragraph.
Move data between tools automatically. Compute effect sizes, then send results to Forest Plot, Funnel Plot, or Heterogeneity analysis with one click.
Load sample data to see how the tool works, or clear all fields to start fresh.
Drag & drop a file or
CSV, TSV, Excel (.xlsx/.xls) - max 500 rows
| Study | Effect Size | Standard Error |
|---|---|---|
Input the effect size and standard error for each study in your meta-analysis. You can type values directly, paste from a spreadsheet, import a CSV/Excel file, or receive data from the analysis pipeline (effect size calculator, forest plot, or funnel plot tools).
The tool automatically computes the standard DerSimonian-Laird random-effects pooled estimate with its 95% confidence interval and between-study variance (tau-squared). This serves as the baseline for comparison.
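For readers who want to see the computation, below is a minimal base-R sketch of the DerSimonian-Laird estimate described above. The effect sizes `yi` and standard errors `sei` are made-up illustrative values, not data from the tool.

```r
# Minimal DerSimonian-Laird random-effects sketch (base R).
yi  <- c(0.42, 0.31, 0.55, 0.12, 0.48)   # illustrative effect sizes
sei <- c(0.12, 0.15, 0.20, 0.18, 0.10)   # illustrative standard errors

vi <- sei^2
w  <- 1 / vi                              # fixed-effect weights
theta_fe <- sum(w * yi) / sum(w)          # fixed-effect pooled estimate
Q  <- sum(w * (yi - theta_fe)^2)          # Cochran's Q
df <- length(yi) - 1
tau2 <- max(0, (Q - df) / (sum(w) - sum(w^2) / sum(w)))  # DL tau-squared

w_re <- 1 / (vi + tau2)                   # random-effects weights
theta_re <- sum(w_re * yi) / sum(w_re)    # pooled estimate
se_re <- sqrt(1 / sum(w_re))
ci <- theta_re + c(-1, 1) * qnorm(0.975) * se_re          # 95% CI
round(c(estimate = theta_re, lower = ci[1], upper = ci[2], tau2 = tau2), 3)
```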
The tool applies four Vevea-Hedges step-function weight patterns (moderate one-tailed, severe one-tailed, moderate two-tailed, severe two-tailed) and computes adjusted estimates for each. The comparison table shows the absolute and percentage change from the unadjusted estimate.
If the adjusted estimates remain within 10% of the unadjusted estimate across all weight patterns, the finding is considered robust to publication bias. Changes between 10% and 20% warrant cautious interpretation, and changes exceeding 20% indicate meaningful sensitivity to selection.
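As a sketch of how the comparison table and these thresholds fit together, the snippet below classifies hypothetical adjusted estimates against the unadjusted estimate `theta_re` from the previous sketch. The adjusted values are illustrative placeholders, not output from the tool.

```r
# Sketch of the robustness summary; adjusted estimates are hypothetical.
adjusted <- c(moderate_one_tailed = 0.38,
              severe_one_tailed   = 0.29,
              moderate_two_tailed = 0.40,
              severe_two_tailed   = 0.33)
pct_change <- 100 * abs(adjusted - theta_re) / abs(theta_re)
verdict <- cut(pct_change, breaks = c(0, 10, 20, Inf),
               labels = c("robust", "interpret cautiously", "sensitive"),
               include.lowest = TRUE)
data.frame(pct_change = round(pct_change, 1), verdict = verdict)
```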
Examine the D3.js comparison forest plot showing all estimates side by side, and the weight function bar chart illustrating how selection weights decrease for non-significant p-value intervals under the severe one-tailed model.
Download plots as PNG or PDF. Copy reproducible R code for the weightr package. Copy the auto-generated methods paragraph for your manuscript. Export the full comparison table as CSV.
Need this done professionally? Get a complete systematic review or meta-analysis handled end-to-end.
Get a Free Quote

Unlike funnel plots and regression tests that detect asymmetry as a proxy for publication bias, selection models directly specify the probability of publication as a function of p-value. This makes the assumptions transparent and testable, and allows for adjustment of the pooled estimate under specific selection scenarios.
Running the analysis under moderate and severe, one-tailed and two-tailed weight patterns provides a thorough sensitivity analysis. If the conclusions remain stable across all four patterns, you can report with greater confidence that the finding is robust to potential publication bias. If conclusions change under only the most extreme pattern, the evidence is still relatively strong.
When the adjusted estimates differ from the unadjusted estimate by less than 10% across all weight patterns, the meta-analytic conclusion is considered robust to publication bias under the Vevea-Hedges framework. Changes between 10% and 20% warrant cautious interpretation, and changes exceeding 20% indicate meaningful sensitivity to selection assumptions.
Selection models work best as one component of a comprehensive publication bias assessment. Combine with funnel plot visual inspection, Egger's regression test, trim-and-fill analysis, p-curve analysis, and the Doi plot with LFK index. Agreement across methods strengthens confidence in the conclusions about the presence or absence of publication bias.
The choice of weight pattern should be informed by knowledge about publication practices in the relevant research field. In some fields, two-tailed selection (where any significant result is favored) may be more realistic than one-tailed selection. In clinical trials, where positive results are strongly favored, one-tailed severe selection may be the most appropriate scenario to test.
The R code generated by this tool uses the weightr package (Coburn and Vevea, 2019), which provides the full maximum-likelihood implementation of the Vevea-Hedges selection model including likelihood ratio tests for selection. The browser-based tool provides a quick sensitivity analysis, while the R code enables the complete statistical treatment with formal hypothesis testing.
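A minimal sketch of what such a fit looks like with `weightfunct()` from weightr (the tool's generated code may differ), reusing the illustrative `yi` and `sei` from the earlier sketch:

```r
# install.packages("weightr")  # if not already installed
library(weightr)

yi  <- c(0.42, 0.31, 0.55, 0.12, 0.48)   # illustrative effect sizes
sei <- c(0.12, 0.15, 0.20, 0.18, 0.10)   # illustrative standard errors

# Maximum-likelihood fit with a single cutpoint at one-tailed p = .025.
# Printing the result reports the unadjusted and adjusted models and,
# as described above, a likelihood ratio test for selection.
fit <- weightfunct(effect = yi, v = sei^2, steps = c(0.025, 1))
fit
```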
Publication bias occurs when the publication of research findings depends on the nature and direction of results. Studies with statistically significant or favorable results are more likely to be published, creating a systematic overestimation of effects in meta-analyses that rely only on published literature. Selection models address this problem by explicitly modeling the selection process and adjusting the pooled estimate accordingly (Hedges, 1984; Iyengar and Greenhouse, 1988; Vevea and Hedges, 1995).
The Vevea-Hedges weight-function model (Vevea and Hedges, 1995) uses a step function to assign selection weights to studies based on their p-values. The p-value range (0 to 1) is divided into intervals, and each interval receives a weight representing the relative probability of publication for studies whose p-values fall in that interval. Studies with significant p-values (below 0.05) typically receive a weight of 1.0, while studies with non-significant p-values receive progressively lower weights. The model then re-estimates the pooled effect using these weighted likelihood contributions.
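To make the step function concrete, here is a toy R helper that maps each study's p-value to the weight of its interval; the cutpoints and weights are hypothetical examples, not the model's estimated values.

```r
# Toy step function: `steps` are upper bounds of p-value intervals,
# `weights` the relative publication probability within each interval.
step_weight <- function(p, steps = c(0.05, 0.50, 1.00),
                        weights = c(1.00, 0.70, 0.50)) {
  weights[findInterval(p, c(0, steps), rightmost.closed = TRUE)]
}

step_weight(c(0.01, 0.20, 0.80))
#> 1.0 0.7 0.5
```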
This approach differs fundamentally from trim-and-fill (Duval and Tweedie, 2000), which adjusts for publication bias by imputing missing studies to make the funnel plot symmetric. Trim-and-fill operates on the assumption that asymmetry reflects missing studies, but it cannot model the selection mechanism explicitly. Selection models are more flexible because they can accommodate different selection patterns (one-tailed vs. two-tailed, moderate vs. severe) and provide formal likelihood ratio tests for the presence of selection.
One-tailed selection models assume that selection operates in one direction: studies reporting positive significant results (typically p < 0.025 one-tailed, corresponding to p < 0.05 two-tailed for a positive effect) are most likely to be published. Two-tailed selection models assume that any significant result (positive or negative) is more likely to be published than a non-significant result. The choice between these models depends on the research context. In fields where results in either direction are publishable (e.g., basic psychology), two-tailed selection may be more appropriate. In clinical research where positive treatment effects drive publication, one-tailed selection is more realistic.
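A quick sketch of how the same z-scores yield different p-values under the two conventions, using the illustrative `yi` and `sei` from earlier:

```r
# One- vs. two-tailed p-values for the same z-scores.
z <- yi / sei
p_one <- pnorm(z, lower.tail = FALSE)           # one-tailed: only positive effects count
p_two <- 2 * pnorm(abs(z), lower.tail = FALSE)  # two-tailed: either direction counts
round(cbind(z, p_one, p_two), 3)
```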
The sensitivity analysis approach implemented in this tool follows recommendations by Vevea and Woods (2005), who proposed using a priori weight functions representing plausible selection scenarios rather than estimating the weights from the data. This approach avoids the problem of overfitting weight parameters in small meta-analyses and provides interpretable results: if the pooled estimate changes substantially under a plausible selection scenario, the conclusion is sensitive to publication bias.
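In weightr, this a priori approach corresponds to fixing the weights via the `weights` argument rather than estimating them. A sketch with illustrative cutpoints and weights (placeholders, not the published Vevea and Woods values), again using `yi` and `sei` from above:

```r
library(weightr)

# Vevea-Woods style sensitivity analysis: weights are fixed, not estimated.
sens <- weightfunct(effect = yi, v = sei^2,
                    steps   = c(0.025, 0.50, 1.00),  # p-value interval bounds
                    weights = c(1.00, 0.60, 0.30))   # fixed relative weights
sens
```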
For a comprehensive publication bias assessment, combine this tool with our funnel plot and publication bias tool for visual inspection, Egger's test, and trim-and-fill. Evaluate evidential value with our p-curve analysis tool. Detect asymmetry in small meta-analyses with our Doi plot generator and LFK index. Visualize your primary meta-analysis results with the forest plot generator.
Selection models (also called weight-function models) are statistical methods that explicitly model the process by which studies are selected for publication based on their results. Unlike methods that detect publication bias indirectly through asymmetry (funnel plots, Egger's test), selection models directly estimate the probability that a study would be published as a function of its p-value. By specifying how studies with non-significant results are differentially suppressed, selection models produce adjusted pooled estimates that account for the missing studies.
The Vevea-Hedges model (Vevea and Hedges, 1995) uses a step function to assign selection weights based on p-value intervals. Studies with significant p-values (e.g., below 0.05) receive a weight of 1.0, meaning they are assumed to be published with certainty. Studies with non-significant p-values receive reduced weights that decrease in steps as the p-value increases. The model then re-estimates the pooled effect using these weighted studies, producing an adjusted estimate that reflects what the meta-analysis would show if non-significant studies had the same probability of publication as significant ones.
Trim-and-fill (Duval and Tweedie, 2000) is a non-parametric method that identifies missing studies from funnel plot asymmetry and imputes them to produce a symmetrical plot. It does not model the selection mechanism explicitly. The Vevea-Hedges model explicitly specifies how selection operates through weight functions, making it more transparent about its assumptions. Trim-and-fill can only adjust for asymmetry, while selection models can adjust for selection even when the funnel plot appears symmetric (e.g., when both positive and negative significant results are favored over null results). Selection models generally have better statistical properties than trim-and-fill for detecting and adjusting for publication bias.
Selection models are most useful as a sensitivity analysis to assess how robust your meta-analytic conclusions are to different assumptions about publication bias. They are particularly valuable when you have reason to believe that the publication process may have favored significant results, which is common in most research fields. Best practice is to run selection models with multiple weight patterns (moderate and severe, one-tailed and two-tailed) to see whether the adjusted estimates are substantially different from the unadjusted estimate. If the conclusions remain stable across patterns, the finding is considered robust to potential publication bias.
The choice of weight functions should reflect plausible selection scenarios in your research domain. One-tailed models assume that selection operates only in one direction (e.g., only positive significant results are favored). Two-tailed models assume that any significant result (positive or negative) is more likely to be published than non-significant results. Moderate weight patterns represent a scenario where non-significant studies are somewhat less likely to be published, while severe patterns represent a scenario where non-significant studies are heavily suppressed. Running all four combinations (moderate/severe crossed with one-tailed/two-tailed) provides a thorough sensitivity analysis.
Selection models make explicit assumptions about the selection mechanism, and if these assumptions are wrong, the adjusted estimates may be biased. The step-function approach can be sensitive to the choice of p-value cutoffs and weight values. With small numbers of studies (fewer than 10), the adjusted estimates can be unstable. Selection models also assume that the effect sizes and standard errors are correctly specified, and they cannot distinguish publication bias from other sources of small-study effects such as genuine heterogeneity or methodological differences between small and large studies.
Detect publication bias with contour-enhanced funnel plots, Egger's test, and trim-and-fill using our funnel plot and publication bias tool. Assess asymmetry in small meta-analyses with our Doi plot generator and LFK index. Evaluate evidential value with our p-curve analysis tool. Visualize pooled results with our forest plot generator for meta-analysis.
Reviewed by
Dr. Sarah Mitchell holds a PhD in Biostatistics from Johns Hopkins Bloomberg School of Public Health and has over 15 years of experience in systematic review methodology and meta-analysis. She has authored or co-authored 40+ peer-reviewed publications in journals including the Journal of Clinical Epidemiology, BMC Medical Research Methodology, and Research Synthesis Methods. A former Cochrane Review Group statistician and current editorial board member of Systematic Reviews, Dr. Mitchell has supervised 200+ evidence synthesis projects across clinical medicine, public health, and social sciences. She reviews all Research Gold tools to ensure statistical accuracy and compliance with Cochrane Handbook and PRISMA 2020 standards.
Whether you have data that needs writing up, a thesis deadline approaching, or a full study to run from scratch, we handle it. Average turnaround: 2-4 weeks.