How to write a systematic review is the single most common question researchers face when asked to produce high-level evidence synthesis. A systematic review is not a literature review with extra steps. It is a distinct research method governed by the Cochrane Handbook (Higgins et al., 2023) and reported according to PRISMA 2020 guidelines (Page et al., 2021). This guide walks you through every phase of the systematic review process, from formulating your research question to submitting a publish-ready manuscript, with links to free tools you can use at each step.

Whether you are a PhD student conducting your first review for a dissertation chapter or a medical researcher synthesizing clinical trial evidence for guideline development, this step-by-step systematic review guide gives you the complete methodology. Every recommendation is grounded in the Cochrane Handbook and PRISMA 2020, the two authoritative sources that peer reviewers and thesis examiners will use to evaluate your work.

A systematic review is a rigorous, protocol-driven research method that identifies, appraises, and synthesizes all available evidence on a specific question. It follows PRISMA 2020 reporting guidelines (Page et al., 2021), uses pre-registered protocols on PROSPERO, and applies Cochrane Handbook methodology (Higgins et al., 2023) to minimize bias and maximize reproducibility.

What Is a Systematic Review?

A systematic review is an evidence synthesis method that uses explicit, reproducible methods to identify, select, appraise, and synthesize all relevant research on a defined question. It differs fundamentally from a narrative literature review in its commitment to methodological transparency and bias reduction.

Three characteristics distinguish a systematic review from other review types:

  1. Pre-registered protocol. The methodology is documented and registered before the review begins, typically on PROSPERO, locking decisions about eligibility criteria, search strategy, risk of bias tools, and synthesis methods. This prevents post-hoc outcome switching.

  2. Reproducible search strategy. The search uses structured Boolean search operators across multiple databases (PubMed, Embase, CINAHL, Cochrane Library, Web of Science), with the full electronic strategy reported as a supplementary appendix. Another researcher should be able to replicate your search and retrieve the same results.

  3. Standardized quality assessment. Every included study is assessed for risk of bias using validated tools, not subjective judgment. The Cochrane Handbook (Higgins et al., 2023) specifies which tool to use for each study design.

A systematic review is an evidence synthesis method: it synthesizes primary research rather than generating new data. It follows PRISMA 2020 reporting guidelines, which provide a 27-item checklist and a four-phase flow diagram that structure the manuscript. And it follows the methodology of the Cochrane Handbook, the definitive reference for how each phase should be conducted.

Understanding how SRs differ from literature reviews is critical before you begin. If your supervisor or funder expects a systematic review but you deliver a narrative review, the work will be rejected regardless of its quality. For an overview of the broader landscape, see our guide to types of evidence synthesis reviews.

Before You Start: Planning Your Systematic Review

Planning determines whether your review succeeds or stalls at screening. Before opening a single database, complete three preparatory tasks: formulate your question, assemble your team, and check for existing reviews.

Formulate Your Research Question Using PICO

A well-structured research question is the foundation of every systematic review decision: your eligibility criteria, search strategy, data extraction variables, and outcome measures all flow from it. The PICO framework structures research questions into four components:

  1. Population (P): who the question applies to, including condition, age, and setting
  2. Intervention (I): the treatment, exposure, or test of interest
  3. Comparison (C): the alternative the intervention is judged against, such as placebo or usual care
  4. Outcome (O): the measurable results that answer the question

PICO structures research questions by forcing specificity. A vague question like "What treatments work for diabetes?" is unsearchable. A PICO-structured question, "In adults with type 2 diabetes (P), do SGLT2 inhibitors (I) compared with placebo (C) reduce HbA1c levels and cardiovascular events (O)?", directly translates into search terms and eligibility criteria.
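As a sketch, the four PICO components can be captured as structured data so the same object feeds your eligibility criteria, protocol fields, and search blocks. The class and field names below are illustrative, not from any specific tool:

```python
# Hypothetical sketch: a PICO question as structured data.
from dataclasses import dataclass

@dataclass
class PICO:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def question(self) -> str:
        # Render the four components as a single searchable question
        return (f"In {self.population}, how does {self.intervention} "
                f"compare with {self.comparison} for {self.outcome}?")

q = PICO(
    population="adults with type 2 diabetes",
    intervention="SGLT2 inhibitors",
    comparison="placebo",
    outcome="HbA1c levels and cardiovascular events",
)
print(q.question())
```

Because each component is a named field, the same object can later drive the concept blocks of your Boolean search.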

Use our free PICO framework builder to structure your research question. It generates your PICO table, suggests search terms, and formats the output for your protocol.

Assemble Your Review Team

A systematic review following Cochrane methodology requires a minimum of two independent reviewers for study screening, with inter-rater agreement measured by Cohen's kappa (Higgins et al., 2023). This is not optional: single-reviewer screening introduces selection bias that peer reviewers will identify and criticize.

Your minimum team:

| Role | Responsibility | Minimum |
| --- | --- | --- |
| Lead reviewer | Protocol, search, screening, extraction, writing | 1 |
| Second reviewer | Independent screening, extraction verification | 1 |
| Subject expert | Clinical or domain guidance | 1 (can overlap with reviewer) |
| Information specialist | Search strategy development | Recommended |
| Statistician | Meta-analysis (if applicable) | As needed |

If you are a PhD student working on a dissertation, your supervisor typically serves as the second reviewer. If your supervisor cannot commit to screening duties, consider enlisting a co-student or hiring a systematic review expert for specific phases. You can also calculate agreement statistics using our inter-rater reliability calculator.

Check for Existing Reviews

Before investing months of work, verify that your review does not already exist. Search these four sources:

  1. Cochrane Library: The most comprehensive collection of systematic reviews in health. If a Cochrane review on your topic exists and is recent (updated within 2-3 years), your review may be redundant unless you have a different scope or population.

  2. PROSPERO: Search the registry for ongoing or recently completed reviews on your topic. If a protocol is registered but not yet published, consider whether your review adds sufficient novelty or whether collaboration would be more productive.

  3. PubMed: Search with your PICO terms plus the filter "systematic review" in the publication type field.

  4. JBI Evidence Synthesis: The Joanna Briggs Institute maintains a registry of systematic review protocols and completed reviews, particularly in nursing and allied health.

Finding an existing review does not necessarily stop your project. Your review may cover a different population, include newer studies, use different eligibility criteria, or address a different outcome. But you must justify why a new review is needed, and this justification belongs in your protocol introduction.

Step 1: Write and Register Your Protocol

A systematic review requires protocol registration before screening begins. The protocol is the blueprint of your review: it documents every methodological decision so that readers (and peer reviewers) can verify that you followed your plan rather than making post-hoc adjustments based on results.

What Goes in the Protocol

Your systematic review protocol should document:

  1. Background and rationale: why the review is needed
  2. Review question (PICO) and full eligibility criteria
  3. Information sources and the complete search strategy
  4. Study selection and data extraction procedures
  5. Risk of bias assessment tools and methods
  6. Planned synthesis methods, including any meta-analysis, subgroup, and sensitivity analyses

PRISMA-P for Protocol Reporting

PRISMA-P (Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols) is the checklist designed specifically for protocols. It contains 17 items covering administrative information, introduction, and methods. Do not confuse PRISMA-P with PRISMA 2020: they serve different stages. PRISMA-P guides your protocol; PRISMA 2020 guides your final manuscript.

For a comprehensive walkthrough of systematic review protocol development, read our SR protocol development guide.

Register on PROSPERO

PROSPERO is the international prospective register of systematic reviews. PROSPERO protocol registration is strongly recommended by Cochrane and required by many high-impact journals. Registration serves two purposes:

  1. Prevents duplication: Other researchers can see your ongoing review and avoid redundant work.
  2. Prevents outcome switching: Your registered protocol locks your primary outcomes before you see the data, strengthening the credibility of your findings.

Registration is free and typically takes 2-4 weeks for approval. The critical rule: register before screening begins. If you register after screening has started, PROSPERO flags your registration as retrospective, which weakens your methodological credibility.

Use our PROSPERO registration formatter to prepare your submission fields, or read our full PROSPERO registration guide for a detailed walkthrough.

Step 2: Develop Your Search Strategy

A comprehensive search strategy is the methodological backbone of your systematic review. Boolean search retrieves studies from electronic databases using structured combinations of terms, operators, and filters. A poorly constructed search either misses relevant studies (low sensitivity) or retrieves thousands of irrelevant records (low specificity). The goal is high sensitivity with manageable specificity.

Build Your Boolean Search String

Boolean search operators combine your PICO-derived terms into executable queries:

A standard search structure follows this pattern: (Population terms OR synonyms) AND (Intervention terms OR synonyms) AND (Outcome terms OR synonyms). Each concept block combines controlled vocabulary with free-text keywords.

Each concept block should include MeSH terms (Medical Subject Headings) for controlled vocabulary indexing plus free-text terms for natural-language coverage. MeSH terms capture studies that have been indexed under a specific concept even if the exact words do not appear in the title or abstract.
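The block structure above can be sketched in code. The helper names here are hypothetical, but the `[Mesh]` and `[tiab]` field tags are standard PubMed syntax:

```python
# Sketch: OR controlled-vocabulary and free-text terms within each
# concept block, then AND the blocks, per the (P) AND (I) AND (O)
# pattern described above. Helper names are illustrative.
def concept_block(mesh_terms, keywords):
    # Combine MeSH headings and title/abstract keywords with OR
    terms = [f'"{t}"[Mesh]' for t in mesh_terms]
    terms += [f"{k}[tiab]" for k in keywords]
    return "(" + " OR ".join(terms) + ")"

def build_search(*blocks):
    # AND the concept blocks into one executable query
    return " AND ".join(blocks)

population = concept_block(["Diabetes Mellitus, Type 2"],
                           ["type 2 diabetes"])
intervention = concept_block(["Sodium-Glucose Transporter 2 Inhibitors"],
                             ["SGLT2 inhibitor*"])
query = build_search(population, intervention)
print(query)
```

Generating the string programmatically keeps every synonym documented, which makes the supplementary search appendix trivial to produce.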

Use our free Boolean search string generator to build and validate your search string, then translate PubMed to Embase search syntax for cross-database execution.

The Cochrane Handbook (Higgins et al., 2023) recommends searching a minimum of two databases, but most published reviews search three to five. The core databases for health-related systematic reviews:

| Database | Coverage | Controlled Vocabulary | Best For |
| --- | --- | --- | --- |
| PubMed / MEDLINE | 36M+ records, biomedical | MeSH terms | Clinical trials, biomedical research |
| Embase | 40M+ records, pharmaceutical | Emtree terms | Drug studies, pharmacology, European literature |
| CINAHL | Nursing and allied health | CINAHL headings | Nursing interventions, qualitative studies |
| Cochrane Library (CENTRAL) | Controlled trials register | MeSH terms | RCTs, Cochrane reviews |
| Web of Science | Multidisciplinary, citation indexing | None (keyword-based) | Cross-disciplinary reviews, citation tracking |

Each database uses different controlled vocabulary and search syntax. A PubMed search cannot be copy-pasted into Embase; you must translate MeSH terms to Emtree equivalents and adapt the syntax. Our database search translator automates this translation.

Sensitivity vs. Specificity

Your search must balance two competing priorities:

  1. Sensitivity: the proportion of all relevant studies that your search retrieves
  2. Specificity: the proportion of retrieved records that are actually relevant

For systematic reviews, the Cochrane Handbook prioritizes sensitivity over specificity. It is better to screen 5,000 records and miss nothing than to screen 500 records and miss three pivotal studies. Plan your screening capacity around a high-sensitivity search.
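One way to sanity-check sensitivity is "known-item" testing: run the draft search and confirm it retrieves a set of studies you already know are relevant (for example, studies cited by an earlier review). A minimal sketch, with invented record IDs:

```python
# Hypothetical validation sketch: estimate search sensitivity against
# a known-relevant set. The IDs below are placeholders, not real PMIDs.
def search_sensitivity(retrieved_ids, known_relevant_ids):
    # Sensitivity = known relevant records retrieved / all known relevant
    found = retrieved_ids & known_relevant_ids
    return len(found) / len(known_relevant_ids)

retrieved = {"1001", "1002", "1003", "1005", "1008"}
known = {"1002", "1005", "1009"}
print(round(search_sensitivity(retrieved, known), 2))  # 2 of 3 known records found
```

If a known-relevant study is missed, inspect its indexing to find the synonym or MeSH term your strategy lacks, then revise the search.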

For detailed guidance on building your search strategy, including database-specific syntax examples and grey literature sources, read our dedicated search strategy guide.

Step 3: Screen and Select Studies

Dual-reviewer screening is the quality control mechanism that separates systematic reviews from narrative literature reviews. Two reviewers independently assess every record against your pre-specified study selection criteria, and disagreements are resolved through discussion or a third reviewer. Dual-reviewer screening reduces selection bias by ensuring that no single reviewer's judgment determines which studies enter the review.

Title and Abstract Screening

Title and abstract screening is the first filter. Each reviewer independently reads the title and abstract of every record retrieved by your search and classifies it as "include," "exclude," or "maybe." At this stage, apply a low threshold for inclusion: if there is any doubt, advance the record to full-text screening. It is safer to review a full text unnecessarily than to exclude a relevant study based on an ambiguous abstract.

Before beginning, calibrate your screening criteria by piloting on 50 abstracts. Both reviewers screen the same 50 records independently, then compare decisions. If agreement is low (Cohen's kappa below 0.61), discuss the eligibility criteria, clarify ambiguities, and pilot again until you reach substantial agreement.

Cohen's kappa measures inter-rater reliability, the degree of agreement beyond chance. Cochrane considers kappa values of 0.61-0.80 as "substantial" and 0.81-1.00 as "almost perfect." Record your kappa value; you will report it in your methods section. Calculate yours with our inter-rater reliability calculator.
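Cohen's kappa is simple enough to compute directly: kappa = (po - pe) / (1 - pe), where po is observed agreement and pe is the agreement expected by chance. A minimal sketch with invented screening decisions:

```python
# Sketch: Cohen's kappa for two reviewers' decisions on the same
# pilot set. The decision labels below are invented for illustration.
def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of identical decisions
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each label's marginal proportions
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
             for l in labels)
    return (po - pe) / (1 - pe)

a = ["inc", "inc", "exc", "exc", "inc", "exc", "exc", "exc", "inc", "exc"]
b = ["inc", "exc", "exc", "exc", "inc", "exc", "exc", "inc", "inc", "exc"]
kappa = cohens_kappa(a, b)
print(round(kappa, 2))
```

In this invented example kappa falls below 0.61, so the reviewers would clarify the eligibility criteria and pilot another batch before proceeding.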

Full-Text Screening

Records that pass title and abstract screening advance to full-text screening. Both reviewers read the full text of each remaining study and apply the complete eligibility criteria. At this stage, record the specific reason for every exclusion; PRISMA 2020 requires you to report the number of excluded full texts with reasons, categorized by exclusion criterion (Page et al., 2021).

Common reasons for full-text exclusion:

  1. Wrong population: participants fall outside the eligibility criteria
  2. Wrong intervention or comparator
  3. Wrong study design: for example, no control group
  4. Outcomes of interest not reported
  5. Duplicate publication of an already-included study
  6. Full text unavailable after reasonable attempts to obtain it

Create Your PRISMA Flow Diagram

A systematic review produces a PRISMA flow diagram, a four-phase visual that tracks the flow of records from identification through screening to inclusion. PRISMA 2020 requires authors to report against a 27-item checklist and include a four-phase flow diagram (Page et al., 2021). The diagram reports:

  1. Identification: records retrieved from each database and register, plus duplicates removed
  2. Screening: records screened at title and abstract, and records excluded
  3. Eligibility: full texts assessed, with exclusions listed by reason
  4. Included: studies (and their reports) included in the review

Use our free tool to create your PRISMA flowchart with automatic formatting that meets PRISMA 2020 specifications. You can also remove duplicate citations before screening begins.
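The diagram's bookkeeping is straightforward arithmetic: each phase's count must equal the previous phase minus its exclusions. A minimal sketch with invented counts:

```python
# Sketch of PRISMA flow-diagram bookkeeping; all counts are invented.
def prisma_counts(identified, duplicates, ta_excluded, ft_excluded):
    screened = identified - duplicates      # records screened at title/abstract
    full_text = screened - ta_excluded      # reports assessed in full text
    included = full_text - ft_excluded      # studies included in the review
    return {"identified": identified, "screened": screened,
            "full_text": full_text, "included": included}

flow = prisma_counts(identified=4821, duplicates=1203,
                     ta_excluded=3390, ft_excluded=186)
print(flow)
```

Tracking the counts this way makes it easy to spot arithmetic inconsistencies, a common reason peer reviewers query a flow diagram.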

Step 4: Extract Data from Included Studies

Data extraction is the process of systematically collecting study-level information from each included study into a standardized data extraction form. A data extraction form collects study-level data in a structured, reproducible format, ensuring that every reviewer captures the same variables in the same way.

What to Extract

Your extraction form should capture these categories:

Study characteristics: first author, publication year, country, study design, setting, sample size, and funding source.

Population characteristics: age, sex, diagnostic criteria, baseline disease severity, and relevant comorbidities.

Intervention and comparator details: dose, frequency, duration, mode of delivery, and any co-interventions.

Outcome data: outcome definitions, measurement instruments, time points, effect estimates with measures of variance, and numbers analyzed per arm.

Pilot Your Extraction Form

Before extracting data from all included studies, pilot the form on 3-5 studies of varying designs and quality. Piloting reveals ambiguities in your extraction categories, missing fields, and inconsistencies in how different reviewers interpret the same data point.

Both reviewers should independently extract data from the pilot studies, then compare. Discrepancies indicate fields that need clearer operational definitions. Revise the form based on pilot findings before proceeding.

Use our free tool to build your data extraction form with pre-populated fields aligned to Cochrane Handbook recommendations. For an in-depth walkthrough, read our guide on data extraction best practices.

Handle Missing Data

Primary studies frequently omit data you need for synthesis. Common scenarios:

  1. Standard deviations not reported: derive them from standard errors, confidence intervals, or exact p-values where the Cochrane Handbook formulas allow
  2. Results reported only in figures: extract values from the graphs or request the underlying data
  3. Outcomes measured but not reported: contact the corresponding author and document the attempt

Record all data transformations and imputation decisions transparently. These belong in your methods section and supplementary materials.

Step 5: Assess Risk of Bias

Risk of bias assessment evaluates the internal validity of each included study. It answers the question: how confident can we be that this study's results reflect the true effect rather than systematic error? The Cochrane Handbook (Higgins et al., 2023) requires risk of bias assessment for every included study, using validated tools matched to study design.

RoB 2 for Randomized Controlled Trials

RoB 2 (Risk of Bias 2 tool) assesses risk of bias in RCTs across five domains:

  1. Bias arising from the randomization process: Was the allocation sequence random? Was it concealed?
  2. Bias due to deviations from intended interventions: Were participants and personnel blinded? Were there protocol deviations?
  3. Bias due to missing outcome data: Was outcome data complete? Were dropouts balanced?
  4. Bias in measurement of the outcome: Was outcome assessment blinded? Were validated instruments used?
  5. Bias in selection of the reported result: Was the analysis plan pre-specified? Are all outcomes reported?

Each domain is rated "low risk," "some concerns," or "high risk." The overall judgment follows an algorithm: if any domain is high risk, the overall assessment is high risk. RoB 2 assesses risk of bias in RCTs using a structured, signaling-question approach that reduces subjectivity compared to older tools.
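The basic decision rule can be sketched as follows. Note this is a simplification: the full RoB 2 algorithm adds nuance this sketch omits (for example, several "some concerns" domains together can also escalate the overall judgment to high risk):

```python
# Simplified sketch of the RoB 2 overall-judgement rule stated above:
# any high-risk domain -> high risk overall; otherwise any
# "some concerns" domain -> some concerns; else low risk.
def rob2_overall(domain_ratings):
    if "high" in domain_ratings:
        return "high"
    if "some concerns" in domain_ratings:
        return "some concerns"
    return "low"

# Five domain ratings for one hypothetical RCT
print(rob2_overall(["low", "some concerns", "low", "low", "low"]))
```

Encoding the rule keeps overall judgments consistent across reviewers and studies instead of relying on case-by-case recall.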

Try our free RoB 2 assessment tool to conduct your assessment with guided signaling questions and automated summary generation.

ROBINS-I for Non-Randomized Studies

ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) assesses risk of bias in non-randomized studies: cohort studies, case-control studies, and controlled before-after studies. It evaluates seven domains:

  1. Bias due to confounding
  2. Bias in selection of participants
  3. Bias in classification of interventions
  4. Bias due to deviations from intended interventions
  5. Bias due to missing data
  6. Bias in measurement of outcomes
  7. Bias in selection of the reported result

ROBINS-I rates each domain as "low," "moderate," "serious," or "critical" risk of bias. A study rated "critical" in any domain should generally be excluded from quantitative synthesis or subjected to sensitivity analysis.

Newcastle-Ottawa Scale for Observational Studies

The Newcastle-Ottawa Scale (NOS) is a simpler quality assessment tool designed for cohort and case-control studies. NOS evaluates observational study quality across three categories: selection, comparability, and outcome (for cohort studies) or exposure (for case-control studies). Each study can receive a maximum of 9 stars.
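A minimal scoring sketch using the standard NOS category maxima (selection up to 4 stars, comparability up to 2, outcome or exposure up to 3):

```python
# Sketch of NOS star scoring with the standard category caps.
def nos_score(selection, comparability, outcome):
    caps = {"selection": 4, "comparability": 2, "outcome": 3}
    stars = {"selection": selection, "comparability": comparability,
             "outcome": outcome}
    for cat, value in stars.items():
        # Reject star counts outside each category's allowed range
        if not 0 <= value <= caps[cat]:
            raise ValueError(f"{cat} stars must be 0-{caps[cat]}")
    return sum(stars.values())  # maximum possible: 9 stars

print(nos_score(selection=3, comparability=2, outcome=2))  # 7 of 9 stars
```

Note there is no universally agreed cut-off for "high quality" on the NOS; many reviews pre-specify a threshold in the protocol and justify it there.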

The NOS is less granular than ROBINS-I but is widely accepted and faster to apply. It is particularly common in public health and epidemiology reviews. The JBI Checklist provides an alternative for specific study designs including qualitative research and prevalence studies.

Use our free NOS quality assessment calculator to score your observational studies.

Presenting Risk of Bias Results

Present your risk of bias results in two formats:

  1. Traffic light plot: A table showing each study's rating for each domain (green = low, yellow = some concerns, red = high risk)
  2. Summary bar chart: The proportion of studies rated low, some concerns, and high risk for each domain

Both figures should appear in your results section. Risk of bias ratings also feed into your GRADE assessment (Step 7) and inform sensitivity analyses in your synthesis (Step 6).

For a comprehensive walkthrough of all tools, read our complete risk of bias guide.

Step 6: Synthesize Your Findings

Synthesis is where you transform extracted data and risk of bias assessments into an answer to your research question. Two approaches exist: narrative synthesis when statistical pooling is not appropriate, and quantitative synthesis (meta-analysis) when studies are sufficiently homogeneous.

Narrative Synthesis

Narrative synthesis is the descriptive approach used when meta-analysis is not feasible or not appropriate. You organize findings thematically, describe patterns across studies, and interpret results in light of risk of bias assessments. Narrative synthesis is not "no analysis"; it requires structured methods:

  1. Group studies by population, intervention, or outcome before comparing them
  2. Tabulate effect directions and sizes in structured summary tables
  3. Weight your interpretation by study size and risk of bias, not by simple vote counting

A systematic review may include meta-analysis for some outcomes and narrative synthesis for others. The decision depends on clinical and methodological heterogeneity across studies for each specific outcome.

Quantitative Synthesis (Meta-Analysis)

Meta-analysis is the statistical combination of results from two or more independent studies to produce a pooled effect size estimate. A systematic review may include meta-analysis when studies are sufficiently similar in design, population, intervention, and outcome measurement. Meta-analysis produces a forest plot that visualizes individual study effects and the pooled estimate with confidence intervals.

Key decisions in meta-analysis:

  1. Effect measure: risk ratio, odds ratio, mean difference, or standardized mean difference, chosen to match the outcome data
  2. Model: fixed-effect or random-effects; random-effects is the usual choice when clinical heterogeneity is expected
  3. Heterogeneity assessment: Cochran's Q and the I-squared statistic
  4. Subgroup and sensitivity analyses: pre-specified in your protocol, not chosen after seeing the data

Meta-analysis produces a forest plot that displays each study's point estimate and confidence interval alongside the pooled diamond. Use our free tool to create a forest plot online. For a complete statistical guide, read our complete meta-analysis guide.
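As a sketch of the underlying arithmetic, here is a minimal random-effects pooling using the DerSimonian-Laird estimator, with invented effect sizes. Real analyses should use dedicated software (RevMan, R's metafor, Stata), but the core calculation looks like this:

```python
# Minimal DerSimonian-Laird random-effects sketch; data are invented.
import math

def random_effects(effects, variances):
    # Fixed-effect weights and pooled estimate (needed to compute Q)
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the I-squared heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Random-effects pooled estimate and 95% confidence interval
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

pooled, ci, i2 = random_effects(
    effects=[-0.42, -0.31, -0.58, -0.12],   # e.g. invented mean differences
    variances=[0.04, 0.02, 0.06, 0.03])
print(round(pooled, 3), round(i2, 1))
```

The pooled estimate, its confidence interval, and I-squared are exactly the quantities a forest plot visualizes: one row per study, plus the diamond for the pooled result.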

Step 7: Rate Certainty of Evidence with GRADE

The GRADE framework (Grading of Recommendations, Assessment, Development, and Evaluation) rates the certainty of evidence for each outcome across the body of included studies. GRADE assesses certainty of evidence; it answers the question: how confident are we that the true effect lies close to the estimated effect?

The Five GRADE Domains

GRADE evaluates evidence across five domains, starting from "high" certainty for RCTs and "low" for observational studies:

| Domain | What It Assesses | Rating Down Criteria |
| --- | --- | --- |
| Risk of bias | Internal validity of included studies | Most studies at high or serious risk of bias |
| Inconsistency | Variability in results across studies | High I-squared, wide prediction intervals, different directions of effect |
| Indirectness | How well the evidence matches your PICO | Differences in population, intervention, comparator, or outcome between studies and your question |
| Imprecision | Precision of the pooled estimate | Wide confidence intervals crossing the null or the minimal clinically important difference |
| Publication bias | Likelihood that studies are missing | Asymmetric funnel plot, Egger's test significant, small-study effects |

Each domain can rate the evidence down by one or two levels. The final certainty rating for each outcome is one of four levels:

  1. High: we are very confident the true effect lies close to the estimate
  2. Moderate: the true effect is probably close to the estimate
  3. Low: the true effect may be substantially different from the estimate
  4. Very low: we have very little confidence in the estimate
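The down-rating logic can be sketched as a starting level minus downgrades. This is a simplification: GRADE also permits up-rating observational evidence (e.g. for large effects), which this sketch omits:

```python
# Sketch of GRADE down-rating: start at "high" for RCT evidence
# ("low" for observational), subtract one level per serious concern,
# flooring at "very low". Up-rating rules are deliberately omitted.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(study_design, downgrades):
    start = 3 if study_design == "rct" else 1   # high vs low starting point
    return LEVELS[max(0, start - sum(downgrades.values()))]

rating = grade_certainty("rct", {
    "risk_of_bias": 1,      # e.g. most studies at high risk
    "inconsistency": 0,
    "indirectness": 0,
    "imprecision": 1,       # e.g. CI crosses the null
    "publication_bias": 0,
})
print(rating)  # two downgrades from "high"
```

The domain-by-domain tally doubles as the footnote trail for your Summary of Findings table, since each downgrade must be justified there.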

Summary of Findings Table

GRADE produces a Summary of Findings table, the single most important output of a GRADE assessment. This table presents, for each outcome:

  1. The absolute and relative effect estimates
  2. The number of participants and studies contributing data
  3. The GRADE certainty rating, with footnotes explaining any downgrades

The Summary of Findings table belongs in your results section and is increasingly required by Cochrane and high-impact journals. It provides decision-makers with a concise, standardized summary of what the evidence shows and how confident we should be in those findings.

Use our free GRADE evidence certainty tool to walk through the five domains and generate your certainty ratings. For a full methodology guide, read our GRADE framework guide.

Step 8: Write and Report Using PRISMA 2020

Your systematic review manuscript must follow PRISMA 2020 compliant reporting standards. PRISMA 2020 requires authors to report against a 27-item checklist and include a four-phase flow diagram (Page et al., 2021). This is not optional for any journal that endorses PRISMA, and most high-impact journals in medicine, public health, and social sciences do.

The 27-Item PRISMA 2020 Checklist

The checklist covers every section of your manuscript:

Title and abstract: Identify the report as a systematic review and provide a structured abstract.

Introduction: State the rationale and explicit objectives or review question.

Methods: Report eligibility criteria, information sources, the full search strategy, selection and data collection processes, risk of bias assessment, effect measures, synthesis methods, and certainty assessment.

Results: Present study selection (flow diagram), study characteristics, risk of bias findings, results of individual studies and syntheses, and certainty of evidence.

Discussion: Interpret the results, discuss limitations of the evidence and of the review process, and state implications.

Other: Report registration and protocol details, funding, competing interests, and availability of data and materials.

Manuscript Structure

A PRISMA 2020 compliant manuscript follows a predictable structure:

Title: Include "systematic review" and, if applicable, "meta-analysis." Your title should convey the scope: population, intervention, and outcome. Example: "SGLT2 Inhibitors for Cardiovascular Risk Reduction in Type 2 Diabetes: A Systematic Review and Meta-Analysis."

Abstract: Use a structured format with subheadings: Background, Objectives, Methods (data sources, study eligibility, data extraction, synthesis), Results (number of studies, key findings, certainty of evidence), Conclusions. Many journals impose a 250-350 word limit.

Methods: This is the most scrutinized section. Report your protocol registration number, full eligibility criteria, search databases and date of last search, screening process and agreement statistics, data extraction procedures, risk of bias tool and rationale, and synthesis methods (narrative and/or statistical). Reference your full electronic search strategy in a supplementary appendix.

Results: Follow the PRISMA flow diagram structure. Report the number of records identified, screened, assessed, and included. Present study characteristics in a summary table. Report risk of bias results with traffic light and summary plots. Present synthesis results: forest plots for meta-analyses, structured tables for narrative synthesis. Report GRADE certainty ratings in a Summary of Findings table.

Discussion: Summarize the main findings in the context of existing evidence. Discuss limitations at both the study level (risk of bias in included studies) and the review level (limitations of your own methodology). State implications for practice and research.

For complete reporting guidance, read our detailed guide on PRISMA 2020 reporting guidelines.

Common Systematic Review Mistakes to Avoid

In our experience guiding researchers through 500+ systematic reviews, these are the errors that most frequently derail projects or trigger peer-review rejection:

1. Research question too broad. "What is the effectiveness of physiotherapy?" is a question that could include thousands of studies across hundreds of conditions. Narrow with PICO until your expected yield is manageable (typically 500-5,000 records from the initial search).

2. No protocol before screening. Starting screening without a registered protocol is the most common methodological shortcut, and the most damaging. Without a locked protocol, peer reviewers cannot verify that your eligibility criteria, outcomes, and synthesis methods were pre-specified rather than chosen after seeing the data.

3. Single-reviewer screening. Using one reviewer for title/abstract or full-text screening violates Cochrane requirements and introduces uncontrolled selection bias. Even if time is limited, at minimum have a second reviewer independently screen a random 20% sample to calculate kappa.

4. Wrong risk of bias tool. Using the Newcastle-Ottawa Scale for RCTs or RoB 2 for observational studies is a methodological error that peer reviewers will catch immediately. Match the tool to the study design:

| Study Design | Correct Tool |
| --- | --- |
| Randomized controlled trial | RoB 2 |
| Non-randomized intervention study | ROBINS-I |
| Cohort or case-control | Newcastle-Ottawa Scale |
| Qualitative study | JBI Checklist |
| Diagnostic accuracy | QUADAS-2 |
| Prevalence study | JBI Prevalence Checklist |

5. Reporting only statistically significant findings. PRISMA 2020 and the Cochrane Handbook require transparent reporting of all pre-specified outcomes, including those with null or negative results. Selective reporting undermines the entire purpose of a systematic review, which is to provide an unbiased summary of the evidence.

6. Ignoring heterogeneity. Pooling results in a meta-analysis without assessing and reporting heterogeneity (I-squared, Q statistic) is a statistical error. If heterogeneity is substantial (I-squared above 50%), you must explore sources through pre-specified subgroup analyses before presenting the pooled estimate.

7. No GRADE assessment. Presenting a forest plot without rating the certainty of evidence tells readers the average effect but not how much confidence to place in it. GRADE is increasingly required by Cochrane, BMJ, Lancet, and other top journals.

How Long Does a Systematic Review Take?

The median time from registration to publication for systematic reviews is 67 weeks (Borah et al., 2017). This figure reflects the in-house academic pathway where researchers conduct every phase themselves alongside teaching, clinical duties, and other research commitments.

Timeline Comparison

| Phase | In-House (Academic) | With Professional Support |
| --- | --- | --- |
| Planning and PICO | 2-4 weeks | 1-2 days |
| Protocol and PROSPERO | 4-8 weeks | 3-7 days |
| Search strategy | 2-6 weeks | 2-5 days |
| Screening | 4-12 weeks | 1-2 weeks |
| Data extraction | 4-8 weeks | 1-2 weeks |
| Risk of bias | 2-4 weeks | 3-5 days |
| Synthesis and meta-analysis | 4-8 weeks | 1-2 weeks |
| Writing and PRISMA reporting | 4-12 weeks | 1-2 weeks |
| Total | 26-62 weeks | 5-8 weeks |

The systematic review timeline varies dramatically based on three factors:

  1. Number of records to screen. A search yielding 500 records is manageable for two reviewers in a few days. A search yielding 10,000 records requires weeks of screening time or specialized software support.

  2. Number of included studies. Extracting data from 8 studies is fundamentally different from extracting data from 80 studies. Each additional study adds approximately 1-3 hours of extraction and risk of bias assessment time.

  3. Meta-analysis complexity. A simple two-arm meta-analysis with one outcome takes hours. A network meta-analysis with multiple treatment comparisons and subgroup analyses takes weeks.

For PhD students working on dissertations, the typical timeline is 6-18 months for a single systematic review chapter. If your timeline is tighter, professional support for specific phases, particularly the search strategy and data extraction, can compress the schedule significantly.

Understanding how to write a systematic review for a dissertation requires particular attention to institutional requirements, ethics committee approvals, and supervisor expectations that do not apply to independent research.

When to Get Professional Systematic Review Help

A systematic review has eight phases, each requiring specialized skills. Most researchers have deep expertise in some phases and limited experience in others. Systematic review help is most commonly needed at these stages:

Search strategy development. Building a comprehensive, reproducible search across multiple databases requires information science expertise, knowledge of controlled vocabularies (MeSH, Emtree, CINAHL headings), proximity operators, and database-specific syntax. This is the phase where research librarians and professional services add the most value.

Statistical analysis. Meta-analysis requires statistical expertise in pooling effect sizes, assessing heterogeneity, conducting subgroup analyses, and interpreting forest plots. If your team lacks a statistician, outsourcing this phase is more efficient than learning R or Stata from scratch.

Risk of bias assessment. Applying RoB 2 or ROBINS-I requires training in the signaling-question framework. Incorrect application leads to inaccurate quality ratings that propagate through your GRADE assessment and conclusions.

PRISMA 2020 reporting. Structuring a manuscript to meet all 27 PRISMA 2020 items is a reporting skill distinct from scientific writing. Professional services experienced in systematic review publication can format your manuscript to journal standards.

Screening at scale. When your search retrieves 5,000+ records, the screening workload for two reviewers becomes weeks of full-time work. Professional services with trained screeners can complete high-volume screening faster while maintaining dual-reviewer standards.

A systematic review expert can handle the entire pipeline from protocol to publication, or support only the phases where you need help. This modular approach lets you retain ownership of the intellectual work while delegating the methodological execution. Read our guide on what a professional SR service includes to understand the scope of support available, or explore Research Gold's research support for a full breakdown of our service tiers.