Build interactive network geometry diagrams, validate connectivity, generate league tables, and export formatted data for R netmeta, WinBUGS, or STATA.
Load sample data to see how the tool works, or clear all fields to start fresh.
Drag & drop a file or
CSV, TSV, Excel (.xlsx/.xls) - max 500 rows
List all treatments. Mark one as the reference comparator.
Enter each direct comparison. Effect and SE are optional (for data export).
| Treatment 1 | Treatment 2 | Studies (k) | Effect | SE |
|---|---|---|---|---|
Add each direct comparison from your included studies by specifying the two treatments, the number of studies informing that comparison, and optionally the pooled effect size and standard error. Mark one treatment as the reference comparator (typically placebo or standard care).
The tool automatically generates an interactive force-directed network diagram using D3.js. Node sizes reflect the number of studies involving each treatment, and edge thickness represents the number of direct comparisons. Drag nodes to rearrange the layout.
Examine the generated league table showing all pairwise treatment comparisons. Cells with direct evidence are highlighted, while indirect-only comparisons are marked separately. This matrix provides a comprehensive overview of the evidence structure.
Review the network connectivity report to identify treatments connected only through single paths (vulnerable to transitivity violations). The tool flags disconnected subnetworks and indirect-only pairs that cannot be validated against direct evidence.
Verify that all treatments form a single connected network using the automated breadth-first search check. Examine the ratio of observed direct comparisons to total possible comparisons. Sparse networks require stronger transitivity assumptions.
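The automated check described above is a standard breadth-first search over the comparison graph. A minimal sketch in Python (treatment names and edges are hypothetical, not from the tool's internals):

```python
from collections import deque

def is_connected(treatments, comparisons):
    """Breadth-first search: is every treatment reachable from the first one?"""
    adj = {t: set() for t in treatments}
    for a, b in comparisons:
        adj[a].add(b)
        adj[b].add(a)
    start = treatments[0]
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node] - seen:   # unvisited neighbours only
            seen.add(nbr)
            queue.append(nbr)
    return seen == set(treatments)

# Hypothetical network: Placebo is the hub, DrugC has no comparisons at all
treatments = ["Placebo", "DrugA", "DrugB", "DrugC"]
edges = [("Placebo", "DrugA"), ("Placebo", "DrugB")]
print(is_connected(treatments, edges))  # False: DrugC is disconnected
```

The same adjacency structure also yields the density ratio mentioned above: `len(edges)` direct comparisons out of `n * (n - 1) / 2` possible pairs.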
Copy the generated R code for the netmeta package (frequentist NMA), export WinBUGS contrast-based format for Bayesian analysis, or download STATA arm-based format. The auto-generated methods paragraph describes your network geometry for the manuscript.
Need this done professionally? Get a complete systematic review or meta-analysis handled end-to-end.
Network meta-analysis synthesizes both head-to-head trial data (direct evidence) and inferred comparisons through common comparators (indirect evidence). This produces relative effect estimates for all treatment pairs, even those never directly compared in any trial.
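The arithmetic behind an indirect comparison through a common comparator is simple: effects subtract and variances add (the adjusted indirect comparison of Bucher et al., 1997). A minimal sketch with made-up numbers:

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Indirect estimate of A vs B through common comparator C:
    subtract the pooled effects, combine the standard errors in quadrature."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    return d_ab, se_ab

# Illustrative log odds ratios vs placebo (invented numbers)
d, se = bucher_indirect(-0.50, 0.15, -0.30, 0.20)
print(round(d, 2), round(se, 3))  # -0.2 0.25
```

Note that the indirect standard error is always larger than either input, which is why sparse networks relying heavily on indirect paths produce less precise estimates.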
The validity of indirect comparisons depends on the transitivity assumption: patients across different comparisons must be sufficiently similar regarding effect modifiers. Violations (e.g., comparing mild-patient trials with severe-patient trials) can produce biased indirect estimates.
The league table is a matrix displaying estimated relative effects for every treatment pair. Diagonal cells hold the treatment names themselves. Upper and lower triangles may show direct versus indirect estimates, facilitating quick identification of agreement or disagreement.
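Assembling such a matrix from direct comparisons can be sketched as follows; treatment names and effect values are illustrative, and the lower triangle is left as a placeholder where indirect estimates would go:

```python
def league_table(treatments, direct):
    """Build a league-table matrix: treatment names on the diagonal,
    direct effects (row vs column) above it, '-' where no direct evidence."""
    n = len(treatments)
    table = [["" for _ in range(n)] for _ in range(n)]
    for i, t in enumerate(treatments):
        table[i][i] = t
    for i in range(n):
        for j in range(i + 1, n):
            eff = direct.get((treatments[i], treatments[j]))
            table[i][j] = f"{eff:+.2f}" if eff is not None else "-"
            table[j][i] = "-"  # lower triangle reserved for indirect estimates
    return table

# Hypothetical direct effects (e.g. mean differences, row vs column)
tx = ["Placebo", "DrugA", "DrugB"]
direct = {("Placebo", "DrugA"): -0.40, ("DrugA", "DrugB"): 0.10}
for row in league_table(tx, direct):
    print(" | ".join(cell.ljust(7) for cell in row))
```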
The network diagram provides an immediate visual summary of which comparisons are well-supported (thick edges, many studies) and which rely on sparse evidence. Node sizes indicate how much data supports each treatment, helping identify star-shaped versus well-connected networks.
Closed loops in the network (e.g., A vs B, B vs C, A vs C) allow statistical tests for inconsistency, where direct and indirect evidence disagree. Node-splitting and loop-based tests help localize problematic comparisons that violate the consistency assumption.
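For a single comparison inside a closed loop, the disagreement between direct and indirect estimates can be tested with a simple z-statistic, assuming the two sources are independent. A sketch with invented numbers:

```python
import math

def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
    """Node-splitting-style check for one comparison: z-statistic for the
    difference between direct and indirect estimates (assumed independent)."""
    diff = d_direct - d_indirect
    se_diff = math.sqrt(se_direct**2 + se_indirect**2)
    return diff / se_diff

# Illustrative: direct and indirect estimates of A vs B agree within noise
z = inconsistency_z(-0.25, 0.12, -0.20, 0.25)
print(abs(z) > 1.96)  # False: no significant inconsistency at the 5% level
```

Full node-splitting implementations (e.g. `netsplit()` in R's netmeta) handle the correlation structure of the network properly; this sketch only conveys the logic of the test.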
P-scores (frequentist) or SUCRA values (Bayesian) provide a numerical ranking of treatments from 0 to 1, where higher values indicate better performance. These rankings account for uncertainty and should be interpreted alongside the effect estimates and confidence intervals.
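Under a simplifying independence assumption, the idea behind a P-score can be approximated from effect estimates and standard errors: for each treatment, average the probability of beating every competitor. The values below are invented for illustration; real implementations account for the correlation between network estimates.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_scores(effects, ses):
    """Approximate P-scores: for each treatment, the mean probability of
    being better than each competitor, assuming larger effects are better
    and pairwise comparisons are independent (a simplification)."""
    names = list(effects)
    scores = {}
    for i in names:
        probs = []
        for j in names:
            if i == j:
                continue
            se_diff = math.sqrt(ses[i]**2 + ses[j]**2)
            probs.append(norm_cdf((effects[i] - effects[j]) / se_diff))
        scores[i] = sum(probs) / len(probs)
    return scores

# Hypothetical effects vs a common reference (larger = better)
effects = {"DrugA": 0.50, "DrugB": 0.30, "Placebo": 0.0}
ses = {"DrugA": 0.10, "DrugB": 0.10, "Placebo": 0.0}
for t, s in sorted(p_scores(effects, ses).items(), key=lambda kv: -kv[1]):
    print(t, round(s, 2))
```

Even in this toy example, a high score does not guarantee a clinically meaningful advantage, which is why rankings belong next to the interval estimates.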
A network meta-analysis requires at least three treatments connected through trials to produce indirect evidence. With only two treatments, standard pairwise meta-analysis is sufficient. The value of network meta-analysis increases with the number of competing interventions.
Well-connected networks with multiple independent paths between treatments produce more precise indirect estimates and allow consistency checks. Star-shaped networks (all comparisons through a single hub) are more fragile because removing the hub disconnects the entire network.
Network meta-analysis (also called mixed treatment comparison) represents an evolution beyond traditional pairwise meta-analysis, enabling simultaneous comparison of multiple competing treatments within a single coherent framework. First formalized by Lumley (2002) and further developed by Lu and Ades (2004), network meta-analysis combines direct evidence from head-to-head trials with indirect comparison evidence derived through common comparators, producing a complete set of relative treatment effects for all pairs in the network. The validity of network meta-analysis depends on three core assumptions: transitivity (effect modifiers are distributed similarly across comparisons), consistency (direct and indirect evidence agree), and homogeneity within each comparison.
The network geometry provides critical information for evaluating feasibility. A well-connected network with multiple closed loops allows statistical tests for inconsistency (node-splitting, loop-based tests), while sparse tree-shaped networks cannot be tested for inconsistency at all. This tool computes the number of direct comparisons relative to possible comparisons, flags indirect-only pairs, and verifies overall connectivity. Researchers should present their network diagram prominently in the manuscript, as recommended by the PRISMA-NMA extension (Hutton et al., 2015). Visualize pairwise pooled effects from your network meta-analysis using our forest plot generator, and calculate individual study effect sizes with our effect size calculator.
The league table is the standard output format for presenting network meta-analysis results. It shows all pairwise relative effects in a matrix where each cell contains the pooled estimate and confidence interval for the comparison between the row and column treatments. Salanti (2012) recommends organizing the league table with treatments ranked by their P-score or SUCRA value so that the most effective treatment appears in the top-left position. This ranking accounts for both the magnitude of effect and the associated uncertainty, providing clinicians and guideline panels with an evidence-based treatment hierarchy.
Assessing consistency between direct and indirect evidence is a critical step that distinguishes rigorous network meta-analysis from naive pooling. The node-splitting approach (Dias et al., 2010) separates direct and indirect estimates for each comparison and tests whether they differ significantly. Global inconsistency tests (design-by-treatment interaction model, as described by Rucker and Schwarzer, 2015) evaluate whether the network as a whole exhibits systematic disagreement between evidence sources. When inconsistency is detected, researchers should investigate potential sources including differences in patient populations, outcome definitions, or intervention implementation across comparisons.
Treatment ranking through P-scores (frequentist, Rucker and Schwarzer, 2015) or SUCRA values (Bayesian) provides a probabilistic summary of how likely each treatment is to be the best option. However, rankings should always be interpreted alongside the effect estimates and their confidence intervals. A treatment may rank first with a P-score of 0.85 but still have wide confidence intervals that overlap substantially with those of the second- and third-ranked treatments. The Cochrane Handbook (Chapter 11) cautions against over-interpreting rank order without considering the precision of the underlying estimates.
This helper tool bridges the gap between study selection and formal statistical analysis. The generated R code uses the netmeta package (Rucker et al., 2023), which implements a frequentist graph-theoretical approach to network meta-analysis with built-in P-score rankings and inconsistency tests. For Bayesian analyses, export the WinBUGS contrast format or use the STATA format with the network suite of commands. Assess publication bias across your network comparisons using our funnel plot and publication bias tool, and explore potential effect modifiers with our meta-regression formatter.
When planning a network meta-analysis, consider the minimum requirements for a valid and informative analysis. At least three treatments must form a connected network, and ideally multiple independent paths should exist between key comparisons to allow consistency checking. Salanti (2012) and the Cochrane Handbook (Chapter 11) recommend pre-specifying the network structure, eligibility criteria, and statistical model before data extraction. For binary outcomes, ensure adequate event rates across comparisons. For continuous outcomes, verify that scales are comparable or convert to standardized mean differences using our effect size converter.
A network meta-analysis (NMA), also called a mixed treatment comparison, simultaneously compares multiple treatments using both direct and indirect evidence. Unlike traditional pairwise meta-analysis which compares only two treatments at a time, NMA synthesizes evidence from a connected network of trials to estimate relative effects between all treatment pairs, even those never directly compared in a head-to-head trial. NMA relies on the consistency assumption, meaning that direct and indirect evidence agree. This approach produces a comprehensive ranking of all treatments and is the gold standard for clinical decision-making when multiple competing interventions exist.
Network meta-analysis is appropriate when you have at least three treatments connected through a network of randomized controlled trials, and you want to compare all treatments simultaneously. The key conditions include: the treatments must form a connected network (every treatment reachable from every other through a chain of direct comparisons), the transitivity assumption must hold (study populations, outcomes, and settings are sufficiently similar across comparisons), and the consistency assumption should be plausible (direct and indirect evidence should not systematically disagree). NMA is particularly valuable for clinical guideline development, health technology assessment, and formulary decisions where multiple alternatives exist.
A network is connected when every treatment can be reached from every other treatment through a chain of direct comparisons. If the network is disconnected, some treatment pairs cannot be compared at all because there is no path (direct or indirect) linking them. This tool checks connectivity automatically. A well-connected network with multiple independent paths between treatments strengthens confidence in indirect estimates because they can be corroborated from different directions. Sparse networks with single connecting paths are more vulnerable to violations of the transitivity assumption.
The transitivity assumption states that patients enrolled in different trials within the network could, in principle, have been randomized to any of the treatments being compared. This means that effect modifiers (variables that change the relative treatment effect) should be distributed similarly across the different comparisons in the network. Violations occur when, for example, trials comparing Drug A vs Placebo enrolled mild patients while trials comparing Drug B vs Placebo enrolled severe patients. Assessing transitivity requires clinical expertise and careful examination of study characteristics across comparisons.
In a network diagram, each node (circle) represents a treatment, and each edge (line) represents a direct comparison between two treatments supported by at least one study. Node size typically reflects the total number of studies or participants involving that treatment, indicating how much evidence supports each node. Edge thickness reflects the number of studies informing that direct comparison. The reference treatment (often placebo or standard care) is highlighted with a different color. A well-designed NMA network has many connections, meaning most comparisons are supported by direct evidence. Sparse networks with few edges rely more heavily on indirect evidence.
The most common NMA software includes: R with the netmeta package (frequentist approach) or gemtc/bnma packages (Bayesian), WinBUGS/OpenBUGS (Bayesian with custom models), STATA with the network suite of commands, and CINeMA for confidence in network meta-analysis ratings. This helper tool formats your data for all major platforms so you can proceed directly to analysis. The R netmeta package is recommended for most users because it provides a complete frequentist framework with built-in network graphs, league tables, forest plots, P-score rankings, and inconsistency tests, all without requiring custom Bayesian model specification.
Visualize pairwise pooled effects from your network meta-analysis using our forest plot generator. Calculate individual study effect sizes before building your NMA network with our effect size calculator for SMD, OR, and RR. Assess publication bias across the comparisons in your network using our funnel plot and publication bias tool with Egger's test and trim-and-fill analysis.
Reviewed by
Dr. Sarah Mitchell holds a PhD in Biostatistics from Johns Hopkins Bloomberg School of Public Health and has over 15 years of experience in systematic review methodology and meta-analysis. She has authored or co-authored 40+ peer-reviewed publications in journals including the Journal of Clinical Epidemiology, BMC Medical Research Methodology, and Research Synthesis Methods. A former Cochrane Review Group statistician and current editorial board member of Systematic Reviews, Dr. Mitchell has supervised 200+ evidence synthesis projects across clinical medicine, public health, and social sciences. She reviews all Research Gold tools to ensure statistical accuracy and compliance with Cochrane Handbook and PRISMA 2020 standards.
Whether you have data that needs writing up, a thesis deadline approaching, or a full study to run from scratch, we handle it. Average turnaround: 2-4 weeks.