Researchers, clinicians, and policymakers rely on systematic reviews to make decisions grounded in the best available evidence. But "systematic review" is not a single method. It is a family of approaches, each designed for different research questions, timelines, and evidence landscapes. Choosing the wrong type wastes months of effort and produces findings that do not answer the question you actually need answered.

This guide compares eight evidence synthesis methods side by side. We cover what each type does, when to use it, how it differs from the others, and which reporting guidelines apply. Whether you are planning your first review or deciding between a scoping review and a traditional systematic review, this comparison will help you choose the right approach. For a detailed walkthrough of the most common type, see our step-by-step SR methodology walkthrough.

Overview of Types of Systematic Reviews

The term evidence synthesis refers to any method that systematically identifies, appraises, and combines findings from multiple studies to answer a research question. Within that umbrella, eight distinct types dominate the literature. Each occupies a specific position in the evidence hierarchy, and each serves a different purpose.

| Type | Purpose | Timeline | Quality Assessment | Quantitative Pooling |
|---|---|---|---|---|
| Traditional Systematic Review | Answer a specific clinical/research question | 6-18 months | Yes (mandatory) | Optional |
| Meta-Analysis | Statistically pool effect sizes within an SR | 6-18 months | Yes (mandatory) | Yes (defining feature) |
| Scoping Review | Map evidence breadth on a broad topic | 3-8 months | No (typically) | No |
| Rapid Review | Provide timely evidence for urgent decisions | 2-6 weeks | Abbreviated | Optional |
| Umbrella Review | Synthesize existing systematic reviews | 3-6 months | Yes (AMSTAR 2) | Optional |
| Narrative Review | Expert summary of a focused topic | 1-3 months | No | No |
| Living Systematic Review | Continuously updated SR | Ongoing | Yes | Yes (updated) |
| Network Meta-Analysis | Compare multiple interventions simultaneously | 6-12 months | Yes | Yes (network) |

These eight types are not interchangeable. A systematic review follows a pre-defined protocol to locate, appraise, and synthesize all relevant studies on a specific question. A scoping review maps the available evidence on a broad topic without answering a narrow clinical question. A meta-analysis is a component of a systematic review that statistically pools effect sizes. Understanding these distinctions is essential before you commit resources to a project.

At Research Gold, we deliver systematic reviews, meta-analyses, and scoping reviews. Each type requires different expertise, different timelines, and different reporting standards. The sections that follow explain each method in detail so you can determine which one fits your research goals. For a broader look, our evidence synthesis services overview covers what to expect from working with a professional team.

Traditional Systematic Review with Meta-Analysis

The traditional systematic review is the gold standard of evidence synthesis. It answers a focused research question by systematically searching the literature, applying pre-defined eligibility criteria, assessing methodological quality, and synthesizing findings according to a transparent, reproducible protocol. When quantitative pooling of effect sizes is included, it becomes a systematic review with meta-analysis.

Defining Characteristics

A traditional SR follows the methodology outlined in the Cochrane Handbook (Higgins et al., 2023). The process begins with a structured research question, typically framed using the PICO framework: Population, Intervention, Comparison, Outcome. The protocol is registered on PROSPERO before searches begin, ensuring transparency and reducing the risk of post hoc modifications.

The search strategy must be comprehensive. It covers multiple databases (MEDLINE, Embase, CINAHL, and others relevant to the topic), grey literature sources, trial registries, and reference lists of included studies. Two independent reviewers screen titles, abstracts, and full texts against pre-defined inclusion and exclusion criteria. Disagreements are resolved by a third reviewer or through consensus.

Risk of bias assessment is mandatory. Tools vary by study design: the Cochrane Risk of Bias tool (RoB 2) for randomized controlled trials, the Newcastle-Ottawa Scale or ROBINS-I for observational studies. Every included study receives a quality rating, and the influence of study quality on overall findings is assessed through sensitivity analyses.

When Meta-Analysis Applies

A meta-analysis is not a separate review type but rather a statistical component that can be embedded within a systematic review. It uses statistical methods to combine effect sizes from individual studies into a single pooled estimate, weighted by study precision. The result is a forest plot showing each study's contribution to the overall effect.

Meta-analysis is appropriate only when the included studies are sufficiently similar in design, population, intervention, and outcome measurement. When clinical or methodological heterogeneity is too great, a narrative synthesis replaces the quantitative pooling. For a detailed walkthrough of the statistical methods involved, see our complete meta-analysis guide.
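To make the pooling mechanics concrete, here is a minimal sketch of fixed-effect inverse-variance pooling in Python. The effect sizes and standard errors are invented for illustration, not drawn from any real review. It also computes Cochran's Q and the I² statistic, a common way to quantify the between-study heterogeneity discussed above.

```python
import math

# Hypothetical effect sizes (log odds ratios) and standard errors
# from five included studies -- illustrative numbers only.
studies = [(-0.42, 0.18), (-0.31, 0.22), (-0.55, 0.30), (-0.10, 0.25), (-0.38, 0.15)]

# Inverse-variance weights: more precise studies contribute more
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q and I^2 quantify between-study heterogeneity
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled log OR: {pooled:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f}), I^2 = {i_squared:.0f}%")
```

In practice, reviews use dedicated software (such as RevMan or the R packages `meta` and `metafor`), and a random-effects model is preferred when heterogeneity is expected; the fixed-effect version above is only the simplest case.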

Reporting Standard

Traditional systematic reviews with meta-analysis report according to PRISMA 2020 (Page et al., 2021). The PRISMA checklist contains 27 items covering title, abstract, introduction, methods, results, discussion, and funding, ensuring that readers can assess the transparency and completeness of the review. The PRISMA flow diagram documents the screening process from initial database hits to final included studies. You can generate one using our free PRISMA flow diagram generator.

Strengths and Limitations

The traditional SR with meta-analysis produces the most rigorous and defensible evidence synthesis. It minimizes bias through systematic methods, and the pooled effect estimate from a meta-analysis provides a precise answer to a specific question. However, it is also the most resource-intensive approach. A typical SR takes 6 to 18 months, involves a team of at least two reviewers plus a statistician, and requires access to multiple bibliographic databases and full-text articles. For many research teams, the timeline alone makes it impractical for time-sensitive decisions.

Scoping Review

A scoping review is a type of evidence synthesis designed to map the breadth and nature of evidence available on a broad topic. Unlike a traditional systematic review, it does not aim to answer a specific clinical question or assess the quality of individual studies. Instead, it identifies key concepts, evidence gaps, and the types of research that have been conducted in a particular area.

Framework and Purpose

Scoping reviews were formalized by Arksey and O'Malley (2005) and later refined by the JBI Manual (Peters et al., 2020). The research question is framed using the PCC framework: Population, Concept, Context. This broader framing reflects the exploratory nature of the scoping review, which asks "What is known about this topic?" rather than "What is the effect of this intervention?"

A scoping review maps available evidence across a domain. It identifies what types of studies exist, what populations have been studied, what outcomes have been measured, and where the gaps lie. This makes scoping reviews particularly valuable as precursors to systematic reviews: if you are unsure whether enough evidence exists to warrant a full SR, a scoping review answers that question first.

How It Differs from a Systematic Review

The distinction between scoping reviews and systematic reviews is one of the most commonly misunderstood points in evidence synthesis methodology. For a deeper comparison, see how scoping reviews differ from SRs. The key differences are summarized here.

| Feature | Systematic Review | Scoping Review |
|---|---|---|
| Research question | Narrow, specific (PICO) | Broad, exploratory (PCC) |
| Quality assessment | Mandatory | Not required |
| Quantitative pooling | Yes (when appropriate) | No |
| Protocol registration | PROSPERO | PROSPERO (since 2023) or OSF |
| Reporting guideline | PRISMA 2020 | PRISMA-ScR |
| Purpose | Answer a question | Map a field |
| Eligibility criteria | Strict, pre-defined | Iterative, may evolve |

Reporting Standard

A scoping review reports using PRISMA-ScR (Tricco et al., 2018). This extension of PRISMA includes 20 essential reporting items and two optional items specific to scoping reviews. PRISMA-ScR ensures transparency in how the review was conducted, what sources were searched, and how results were charted.

When to Choose a Scoping Review

Choose a scoping review when your research aim is to understand the landscape of evidence rather than to estimate an effect size. Common scenarios include: exploring a new or emerging research area, identifying evidence gaps to inform future primary studies, mapping the types and sources of evidence available on a policy-relevant topic, and clarifying key concepts or definitions used across a field.

Scoping reviews are not appropriate when your goal is to inform clinical practice guidelines, assess the effectiveness of an intervention, or produce a pooled effect estimate. For those purposes, a traditional systematic review with meta-analysis remains the correct choice.

Rapid Review

A rapid review is a streamlined form of evidence synthesis that uses abbreviated systematic review methods to produce actionable findings within a compressed timeline. Where a traditional SR takes 6 to 18 months, a rapid review typically takes 2 to 6 weeks.

Why Rapid Reviews Exist

Rapid reviews emerged from the reality that decision-makers cannot always wait for a full systematic review. During public health emergencies, policy development cycles, or clinical guideline updates, the question is not whether perfect evidence exists but whether the best available evidence can be synthesized fast enough to inform a decision.

The COVID-19 pandemic accelerated the acceptance of rapid reviews as a legitimate evidence synthesis method. Organizations including the World Health Organization, the Cochrane Rapid Reviews Methods Group, and national health technology assessment agencies produced hundreds of rapid reviews to guide clinical and policy decisions in real time.

Methodological Shortcuts

Rapid reviews achieve their speed by abbreviating one or more steps of the traditional SR process. Common shortcuts include searching fewer databases, limiting the date range or language of included studies, using a single reviewer for screening (with verification of a sample by a second reviewer), using abbreviated quality assessment tools, and conducting narrative synthesis rather than meta-analysis.

These shortcuts introduce known limitations. The trade-off is explicit: a rapid review sacrifices comprehensiveness for timeliness. The key requirement is transparency about which methodological shortcuts were taken and how they may affect the findings. A well-conducted rapid review with transparent limitations is more useful to a decision-maker than no evidence synthesis at all.

| SR Step | Traditional SR | Rapid Review Approach |
|---|---|---|
| Protocol | Full, registered on PROSPERO | Brief, may not be registered |
| Search | 5+ databases, grey literature, hand-searching | 2-3 databases, limited grey literature |
| Screening | Dual independent | Single reviewer + spot-check |
| Quality assessment | Full tool (RoB 2, NOS) | Abbreviated or checklist |
| Synthesis | Narrative + meta-analysis | Narrative only |
| Timeline | 6-18 months | 2-6 weeks |

When to Choose a Rapid Review

A rapid review is appropriate when a decision cannot wait for a full systematic review, when preliminary evidence is needed to justify a larger review, or when an existing SR needs to be updated quickly. It is not appropriate when the findings will directly inform clinical practice guidelines that require the highest level of evidence rigor.

Umbrella Review, Network MA, and Other Types

Beyond the three most common types, several additional evidence synthesis methods serve specialized purposes. Each occupies a distinct niche in the evidence hierarchy.

Umbrella Review

An umbrella review (also called a review of reviews or overview of reviews) synthesizes findings from multiple existing systematic reviews on the same broad topic. It sits at the top of the evidence hierarchy because it aggregates already-synthesized evidence.

The primary tool for assessing the quality of included systematic reviews within an umbrella review is AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). Each included SR receives a quality rating, and the umbrella review evaluates whether the conclusions of individual SRs are consistent or contradictory.

Umbrella reviews are particularly valuable when multiple systematic reviews on the same topic have reached different conclusions. By comparing the methodological quality and scope of each SR, the umbrella review can identify which findings are most trustworthy and which are undermined by methodological limitations.

Network Meta-Analysis

A network meta-analysis (NMA) extends traditional meta-analysis by comparing three or more interventions simultaneously, even when those interventions have not been directly compared in head-to-head trials. It does this by combining direct evidence (from trials that directly compared two treatments) with indirect evidence (inferred through a common comparator).

For example, if Trial A compares Drug X to placebo and Trial B compares Drug Y to placebo, an NMA can estimate the relative effectiveness of Drug X versus Drug Y, even though no trial directly compared them. The result is a ranking of interventions from most to least effective, presented as a league table or ranking probability plot. For a deeper exploration, see our network meta-analysis overview.
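The Drug X versus Drug Y example above can be sketched as a simple Bucher-style indirect comparison: subtract the two placebo-controlled estimates and sum their variances. The numbers below are invented for illustration; a full NMA handles many trials and treatments at once, typically with Bayesian software.

```python
import math

# Hypothetical trial results on the log odds ratio scale -- illustrative only.
d_x_placebo, se_x = -0.50, 0.15   # Trial A: Drug X vs placebo
d_y_placebo, se_y = -0.20, 0.18   # Trial B: Drug Y vs placebo

# Indirect comparison through the common placebo comparator:
# effect of X vs Y = (X vs placebo) - (Y vs placebo)
d_x_y = d_x_placebo - d_y_placebo
se_x_y = math.sqrt(se_x**2 + se_y**2)   # variances add for indirect estimates

ci = (d_x_y - 1.96 * se_x_y, d_x_y + 1.96 * se_x_y)
print(f"Indirect log OR, X vs Y: {d_x_y:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

Note how the indirect standard error is larger than either direct one: indirect evidence is always less precise, which is one reason the transitivity assumption must be scrutinized before trusting these comparisons.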

NMA is used extensively in health technology assessment to inform formulary decisions, clinical guidelines, and coverage determinations. It requires specialized statistical expertise (typically Bayesian methods in R or WinBUGS) and careful assessment of the transitivity assumption: the assumption that the studies in the network are sufficiently similar to make indirect comparisons valid.

Narrative Review

A narrative review (sometimes called a traditional review or expert review) is a non-systematic summary of the literature on a focused topic. Unlike a systematic review, a narrative review does not follow a pre-defined protocol, does not conduct a comprehensive search, and does not formally assess study quality. The author selects studies based on their expertise and presents a synthesized overview of the evidence.

Narrative reviews serve an important function in medical journals. They provide accessible overviews of complex topics, written by subject matter experts who can contextualize findings and identify emerging trends. However, because the study selection process is not transparent or reproducible, narrative reviews are susceptible to selection bias. They sit below systematic reviews in the evidence hierarchy and are not appropriate for informing clinical practice guidelines.

The distinction between a narrative review and a general literature review is subtle. Both are non-systematic. However, narrative reviews published in medical journals are typically commissioned, written by recognized experts, and focused on a specific clinical or methodological question. A general literature review, such as one written for a graduate thesis, tends to be broader in scope and less focused in its synthesis.

Living Systematic Review

A living systematic review is a traditional systematic review that is continually updated as new evidence becomes available. Rather than publishing once and becoming outdated, a living SR incorporates new studies at regular intervals (monthly, quarterly, or as they appear), re-runs meta-analyses, and updates its conclusions accordingly.

Living systematic reviews are particularly valuable in rapidly evolving fields where the evidence base changes frequently. The Cochrane Collaboration has pioneered living SRs in areas such as COVID-19 treatments, where new trial results appeared weekly and clinical guidelines needed to reflect the latest evidence.

The methodology follows the same rigorous standards as a traditional SR, with the added requirement of ongoing surveillance, updated searches, and transparent version control. The PRISMA 2020 guidelines apply, supplemented by specific guidance for living reviews published by the Cochrane Collaboration.

The practical challenge of living SRs is sustainability. They require a dedicated team willing to maintain the review indefinitely, update the search strategy as new databases or indexing terms emerge, and re-assess the quality of the evidence base with each iteration. For most research teams, this makes living SRs feasible only with institutional support or grant funding.

How to Choose the Right Type of Systematic Review

Selecting the right evidence synthesis method is the most consequential decision you make before starting a review. The wrong choice leads to wasted resources, inappropriate conclusions, and findings that do not serve your intended audience. The decision depends on four factors: your research question, your timeline, the available evidence, and the context in which your findings will be used.

Decision Framework

| If your goal is to... | Use this type | Framework | Timeline |
|---|---|---|---|
| Answer a specific clinical question | Traditional SR + MA | PICO | 6-18 months |
| Map evidence on a broad topic | Scoping Review | PCC | 3-8 months |
| Inform an urgent policy decision | Rapid Review | Modified PICO | 2-6 weeks |
| Synthesize multiple existing SRs | Umbrella Review | Varies | 3-6 months |
| Compare 3+ interventions without direct trials | Network MA | PICO + network | 6-12 months |
| Provide an expert overview of a topic | Narrative Review | None required | 1-3 months |
| Maintain current evidence in a fast-moving field | Living SR | PICO | Ongoing |
| Pool quantitative effect sizes | Meta-Analysis (within SR) | PICO | 6-18 months |

Question Type Determines Method

The structure of your research question is the strongest predictor of the appropriate review type. If you can articulate a focused question using PICO (Population, Intervention, Comparator, Outcome), you need a systematic review. If your question is broad and exploratory, framed using PCC (Population, Concept, Context), a scoping review is more appropriate.

Consider these examples. "Does cognitive behavioral therapy reduce anxiety symptoms in adults with generalized anxiety disorder compared to waitlist control?" is a PICO question that calls for a systematic review with meta-analysis. "What is known about digital mental health interventions for young adults?" is a PCC question that calls for a scoping review.

Timeline Constraints

If your decision cannot wait 6 to 18 months, a traditional SR is not feasible. A rapid review provides the best available evidence within 2 to 6 weeks. If you need to keep the evidence current beyond the initial publication, a living systematic review is the appropriate choice, though it requires ongoing resources.

Evidence Landscape

The type and volume of existing evidence also influence your choice. If multiple systematic reviews already exist on your topic, an umbrella review avoids duplicating effort and instead synthesizes the existing syntheses. If the evidence base is sparse or heterogeneous, a scoping review helps you understand what exists before committing to a full SR.

If the clinical question involves multiple competing interventions and no single trial compares them all, a network meta-analysis fills that gap. If you are writing for a clinical audience and need an accessible summary rather than a rigorous synthesis, a narrative review may be appropriate, provided you acknowledge its limitations.

Registration and Protocol

For traditional systematic reviews and scoping reviews focused on health topics, PROSPERO registration is strongly recommended and increasingly required by journals. PROSPERO registration demonstrates transparency and reduces the risk of duplicate reviews. Network meta-analyses should also be registered. Rapid reviews and narrative reviews are not typically registered, though documenting a brief protocol is always good practice.

Common Misconceptions

Several persistent misconceptions about types of systematic reviews lead researchers to choose the wrong method or misinterpret their findings.

Misconception 1: A meta-analysis is a type of review. Meta-analysis is a statistical technique, not a review type; it is a component that can be included within a systematic review when quantitative pooling is appropriate. You can conduct a systematic review without a meta-analysis (narrative synthesis), but you should not conduct a meta-analysis without a systematic review, because the study selection would not be reproducible or comprehensive.

Misconception 2: Scoping reviews are easier than systematic reviews. Scoping reviews involve the same rigorous search and screening processes as systematic reviews. The difference is in scope and purpose, not in effort. A scoping review on a broad topic may actually require more screening than a narrowly focused SR because the eligibility criteria are broader and the volume of potentially relevant literature is larger.

Misconception 3: Rapid reviews are low-quality systematic reviews. A rapid review is not a poorly conducted SR. It is a distinct methodology with explicit, transparent trade-offs between rigor and timeliness. The quality of a rapid review depends on how transparently the methodological shortcuts are reported and whether the conclusions appropriately reflect those limitations.

Misconception 4: Narrative reviews have no place in evidence-based practice. Narrative reviews serve a valuable function as educational resources and expert syntheses. They provide context, identify emerging trends, and make complex evidence accessible to clinicians. The limitation is not that narrative reviews are useless but that they should not be the sole basis for clinical practice guidelines because their study selection is not transparent or reproducible.

Misconception 5: You need a meta-analysis to publish a systematic review. Many high-quality systematic reviews are published with narrative synthesis only. When included studies are too heterogeneous for quantitative pooling, a well-conducted narrative synthesis with structured tables and sensitivity analyses is both appropriate and publishable. Forcing a meta-analysis on incompatible data produces misleading results.

Misconception 6: An umbrella review replaces conducting a new SR. An umbrella review synthesizes existing SRs, but it can only be as current and rigorous as the reviews it includes. If the underlying SRs are outdated, poorly conducted, or inconsistent in scope, the umbrella review inherits those limitations. An umbrella review answers "What do existing SRs say?" not "What does the primary evidence say?"

Understanding these distinctions is not academic pedantry. Choosing the wrong review type leads to findings that do not answer the question decision-makers need answered, wastes the time and resources of the review team, and may produce conclusions that mislead rather than inform.

For teams that need help determining the right approach, affordable research support pricing is available for every review type covered in this guide. Whether you are planning a traditional SR, a scoping review, or a network meta-analysis, our methodologists can guide you through the process from protocol development to publication. See how we can help or reach out for a quote.