Understanding how to write a scoping review is essential for researchers who need to map the breadth and depth of available evidence on a topic before deciding whether a full systematic review is warranted. A scoping review identifies key concepts, clarifies definitions, examines the types and sources of evidence, and highlights gaps in the existing literature. It is not designed to answer a narrow clinical question or to pool effect sizes; it is designed to chart a research landscape.
The scoping review methodology has matured significantly since its formalization. What began as an informal approach to literature mapping now has a structured framework, dedicated reporting guidelines, and institutional endorsement from the Joanna Briggs Institute. Researchers in health sciences, education, social work, and environmental policy increasingly choose scoping reviews when their objective is exploration rather than intervention effectiveness. This scoping review guide walks through every stage of the process, from question formulation to final reporting.
What Is a Scoping Review?
The term was popularized by Arksey and O'Malley in their seminal 2005 paper, which distinguished scoping reviews from systematic reviews by their purpose and scope. Where a systematic review asks a focused question and appraises the quality of each included study, a scoping review asks a broader question and does not formally assess methodological rigor. This distinction is fundamental: it shapes every decision from question formulation to data presentation.
Scoping reviews are particularly valuable in four scenarios. First, when a research area is emerging and the evidence base is heterogeneous in terms of study designs, populations, and outcomes. Second, when the goal is to identify the types of available evidence before committing to a full systematic review. Third, when the aim is to clarify working definitions or conceptual boundaries. Fourth, when a research team wants to map the key factors related to a concept and identify how these factors are studied across different contexts.
The JBI Manual (Peters et al., 2020) provides the most authoritative contemporary guidance on scoping review conduct. It builds on the original Arksey and O'Malley framework and the enhancements proposed by Levac et al. (2010), creating a comprehensive methodology that aligns with current evidence synthesis standards. Researchers conducting scoping reviews in health and social sciences should treat the JBI Manual as their primary methodological reference.
It is worth noting what scoping reviews do not do. They do not assess risk of bias. They do not grade the certainty of evidence. They do not calculate pooled effect estimates. They do not produce clinical recommendations. If your research question requires any of these outputs, a systematic review, not a scoping review, is the appropriate method. For a detailed comparison, see our guide on scoping vs systematic review.
How to Write a Scoping Review: The 6-Stage Framework
The foundational methodology for conducting a scoping review comes from the Arksey and O'Malley framework, published in 2005. Their original paper proposed five mandatory stages and one optional stage. Levac et al. (2010) refined each stage with additional methodological recommendations, and the JBI Manual (Peters et al., 2020) formalized these into institutional guidance. Together, these three sources form the methodological backbone of modern scoping review practice.
The six scoping review steps are: (1) identifying the research question, (2) identifying relevant studies, (3) study selection, (4) charting the data, (5) collating, summarizing, and reporting the results, and (6) consultation with stakeholders. The first five stages are considered mandatory. The sixth, stakeholder consultation, was originally described as optional by Arksey and O'Malley but is now strongly recommended by Levac et al. and the JBI Manual.
Stage 1: Identifying the Research Question
The research question in a scoping review is deliberately broader than the focused question used in a systematic review. Where a systematic review might ask "Does intervention X improve outcome Y in population Z?", a scoping review asks "What is known about topic A in context B?" The breadth is intentional: it allows the review to capture a wide range of evidence types, study designs, and perspectives.
Formulating the question requires the PCC framework: Population, Concept, and Context. This replaces the PICO framework (Population, Intervention, Comparison, Outcome) used in systematic reviews. The PCC framework is broader by design: it does not require specifying an intervention, comparator, or measured outcome, because scoping reviews are not evaluating effectiveness.
A well-constructed PCC question defines three elements clearly. The Population identifies who is being studied: patients, healthcare providers, students, policymakers, or any other defined group. The Concept identifies the core phenomenon, intervention, or topic area under investigation. The Context identifies the setting, geographic location, cultural factors, or disciplinary boundaries that frame the review.
For example, a scoping review question using PCC might read: "What is the nature and extent of research on digital health literacy (Concept) among older adults aged 65 and above (Population) in primary care settings (Context)?" This question is broad enough to capture qualitative, quantitative, and mixed-methods studies across different countries and time periods, while remaining focused enough to produce a coherent synthesis.
Levac et al. (2010) recommended that the research question be developed iteratively. You may need to refine it after conducting a preliminary search. If the initial question retrieves an unmanageable volume of results, you can narrow the Concept or Context. If it retrieves too few, you can broaden them. This iterative approach distinguishes scoping review question development from the more rigid approach typical of systematic reviews. To structure your question with PCC/PICO, you can use our free framework generator tool.
Stage 2: Identifying Relevant Studies
The search strategy in a scoping review must be comprehensive, reproducible, and transparent: the same standards applied to systematic reviews. The elements of the PCC framework form the backbone of the search strategy, which aims to capture all potentially relevant literature. A Boolean search strategy retrieves studies from databases by combining Population terms, Concept terms, and Context terms with AND/OR operators.
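To make this concrete, here is a minimal Python sketch of how PCC term groups can be assembled into a Boolean search string. The synonym lists are illustrative examples from this guide's running digital-health-literacy scenario, not a validated search strategy; a real strategy should be developed with an information specialist and adapted to each database's syntax.

```python
# Sketch: assembling a Boolean search string from PCC term lists.
# Term lists below are illustrative, not a validated search strategy.

def or_group(terms):
    """Join synonyms with OR and wrap in parentheses; quote multi-word phrases."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

population = ["older adults", "elderly", "seniors"]
concept = ["digital health literacy", "eHealth literacy"]
context = ["primary care", "general practice"]

# Combine the three PCC groups with AND: a record must match at least
# one synonym from every group to be retrieved.
search_string = " AND ".join(or_group(g) for g in (population, concept, context))
print(search_string)
```

Each inner OR group widens recall within one PCC element, while the outer AND operators enforce that all three elements are present.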
You should search a minimum of two electronic databases, though three to five is considered best practice. Common choices include PubMed/MEDLINE, CINAHL, PsycINFO, Scopus, Web of Science, and Embase. The specific databases depend on your discipline: education reviews might use ERIC, social work reviews might use Social Services Abstracts, and environmental reviews might use GreenFILE.
Grey literature is a critical component of scoping review searches. Because scoping reviews aim to map the breadth of evidence, they should include dissertations, conference proceedings, government reports, organizational white papers, and preprints. Sources like ProQuest Dissertations and Theses, OpenGrey, and relevant organizational websites should be searched systematically. Grey literature reduces publication bias and captures evidence that may not appear in indexed databases.
The JBI Manual recommends a three-step search strategy. First, conduct an initial limited search of at least two relevant databases to identify keywords contained in the titles and abstracts of relevant articles. Second, conduct a comprehensive search across all selected databases using all identified keywords and index terms. Third, search the reference lists of all included sources for additional relevant studies.
Document every aspect of your search: databases searched, date of search, full search strings, number of results per database, and any filters applied. This documentation is required for PRISMA-ScR reporting and ensures your search can be reproduced by other researchers. Working with a research librarian or information specialist to develop and validate your search strategy is strongly recommended.
Stage 3: Study Selection
Study selection in a scoping review follows the same two-stage screening process used in systematic reviews: title-and-abstract screening followed by full-text screening. Both stages apply predetermined eligibility criteria that flow directly from your PCC question.
Eligibility criteria should specify inclusion and exclusion parameters for each PCC element. For Population, define who is included (e.g., adults aged 65+) and who is excluded (e.g., studies exclusively about children). For Concept, define what counts as relevant (e.g., studies examining digital health literacy) and what does not (e.g., studies about general computer literacy without a health component). For Context, define the setting boundaries (e.g., primary care settings) and exclusions (e.g., hospital inpatient settings). You can use our eligibility criteria tool to structure these parameters systematically.
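As an illustration, the eligibility parameters above can be encoded as an explicit check so that every screened record is judged against the same PCC criteria. This is a hypothetical sketch using the guide's running example; the field names (`min_age`, `concept`, `setting`) are assumptions, not a standard schema, and real screening still requires human judgment.

```python
# Sketch: PCC eligibility criteria as an explicit screening check.
# Field names and thresholds are illustrative, not a prescribed schema.

ELIGIBLE_SETTINGS = {"primary care"}

def is_eligible(study):
    """Return (decision, reason) for a screened study record."""
    if study["min_age"] < 65:                              # Population: 65+
        return False, "population outside 65+ criterion"
    if "digital health literacy" not in study["concept"]:  # Concept
        return False, "concept not digital health literacy"
    if study["setting"] not in ELIGIBLE_SETTINGS:          # Context
        return False, "setting excluded"
    return True, "meets all PCC criteria"

decision, reason = is_eligible(
    {"min_age": 67, "concept": "digital health literacy", "setting": "primary care"}
)
print(decision, reason)
```

Returning a reason alongside the decision mirrors PRISMA-ScR practice: exclusions at full-text stage must be documented with reasons.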
Levac et al. (2010) emphasized that at least two reviewers should independently screen titles and abstracts, with conflicts resolved through discussion or a third reviewer. This recommendation has been adopted by the JBI Manual and is now considered standard practice. Using screening software such as Covidence or Rayyan, or reference management software such as EndNote, streamlines the screening process and maintains an audit trail.
After title-and-abstract screening, retrieve the full texts of all potentially relevant studies. Apply the same eligibility criteria at full-text level, documenting reasons for exclusion at this stage. The reasons for exclusion are reported in the PRISMA-ScR flow diagram, which provides a transparent visual record of how studies moved through each screening phase.
One important distinction: scoping reviews typically cast a wider net than systematic reviews during screening. Because the goal is breadth rather than precision, borderline studies are more likely to be included than excluded. When in doubt, include the study and let the charting stage reveal whether it contributes meaningfully to the evidence map.
Stage 4: Charting the Data
Data charting is the scoping review equivalent of data extraction in a systematic review, though the approach differs in important ways. Data charting in scoping reviews uses a structured form aligned with PCC elements to systematically record information from each included source. The charting form captures descriptive information (who, what, where, when, and how) rather than outcome data or effect sizes.
A standard data charting form includes fields for: author(s), year of publication, country of origin, study design, population characteristics, concept definition used, context details, key findings relevant to the review question, and any additional variables specific to the review topic. The JBI Manual (Peters et al., 2020) provides a template that researchers can adapt to their specific review.
| Charting Field | Description | Example |
|---|---|---|
| Author, Year | Bibliographic details | Smith et al., 2023 |
| Country | Where the study was conducted | United Kingdom |
| Study Design | Type of study | Cross-sectional survey |
| Population | PCC-P: Who was studied | Adults aged 65-80 (n=450) |
| Concept | PCC-C: What was examined | Digital health literacy assessment |
| Context | PCC-C: Setting/environment | Urban primary care clinics |
| Key Findings | Main results relevant to the review question | Lower digital health literacy associated with reduced use of online services |
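One way to keep charting consistent across reviewers is to represent the form programmatically. The sketch below mirrors the table above as a Python dataclass; the field names follow the JBI-style template but are illustrative, and additional fields can be added as charting evolves iteratively.

```python
# Sketch: a data charting record mirroring the charting table.
# Field names are illustrative; adapt them to your own review.
from dataclasses import dataclass, asdict

@dataclass
class ChartingRecord:
    author_year: str   # Bibliographic details
    country: str       # Where the study was conducted
    study_design: str  # Type of study
    population: str    # PCC-P: who was studied
    concept: str       # PCC-C: what was examined
    context: str       # PCC-C: setting/environment
    key_findings: str  # Results relevant to the review question

record = ChartingRecord(
    author_year="Smith et al., 2023",
    country="United Kingdom",
    study_design="Cross-sectional survey",
    population="Adults aged 65-80 (n=450)",
    concept="Digital health literacy assessment",
    context="Urban primary care clinics",
    key_findings="Findings relevant to the review question",
)
print(asdict(record))
```

Because every source is charted with the same fields, records can later be exported to a spreadsheet or counted directly for the Stage 5 summary.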
Unlike systematic review data extraction, where the form is typically finalized before extraction begins, scoping review charting forms should evolve iteratively. Levac et al. (2010) recommended piloting the charting form on 3-5 studies, then refining categories based on what the data reveal. New variables may emerge as charting progresses; for example, you may discover that funding source is an important contextual variable you did not anticipate. The iterative nature of data charting is a defining feature of scoping review methodology.
Two reviewers should independently chart data from each included study. This reduces errors and ensures consistency. Any discrepancies should be resolved through discussion. For large scoping reviews with dozens or hundreds of included sources, dividing the charting workload between reviewers, with a subset charted by both for inter-rater reliability, is a practical compromise endorsed by the JBI Manual.
Stage 5: Collating, Summarizing, and Reporting Results
Stage 5 is where your charted data becomes a coherent evidence map. This stage involves three activities: collating the results into a structured format, summarizing the findings in relation to your research question, and reporting the outcomes using descriptive and visual methods.
Begin by producing a descriptive numerical summary of the included studies. Report the total number of studies included, a breakdown by year of publication (to show trends over time), a breakdown by country or region, a breakdown by study design, and distributions across your PCC elements. Tables and charts are effective for presenting these distributions: a stacked bar chart showing study designs by decade, for example, immediately communicates how research methods have evolved.
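Such a numerical summary can be computed directly from charted records. The sketch below uses illustrative dummy data with assumed field names (`year`, `country`, `design`); the counting pattern is the point, not the specific fields.

```python
# Sketch: descriptive numerical summary from charted records.
# The records list is illustrative dummy data.
from collections import Counter

studies = [
    {"year": 2019, "country": "UK", "design": "Cross-sectional survey"},
    {"year": 2021, "country": "Canada", "design": "Qualitative interviews"},
    {"year": 2021, "country": "UK", "design": "Cross-sectional survey"},
    {"year": 2023, "country": "Australia", "design": "Mixed methods"},
]

by_year = Counter(s["year"] for s in studies)
by_country = Counter(s["country"] for s in studies)
by_design = Counter(s["design"] for s in studies)

print(f"Included studies: {len(studies)}")
for label, counts in [("Year", by_year), ("Country", by_country), ("Design", by_design)]:
    for key, n in sorted(counts.items()):
        print(f"{label}: {key} -> {n}")
```

The resulting counts feed straight into the tables and charts described above.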
Next, provide a narrative summary organized thematically around your PCC elements or around themes that emerged during data charting. Unlike a systematic review, which synthesizes findings to answer a specific question, a scoping review presents findings descriptively. You are mapping the terrain, not drawing conclusions about intervention effectiveness. Describe what the evidence shows, where it clusters, and where gaps exist.
Evidence gaps should be explicitly identified and discussed. Where are the geographic gaps? Is all the evidence from high-income countries? What populations are underrepresented? Which study designs dominate, and what alternative designs might strengthen the evidence base? Are there conceptual definitions that vary widely across studies? These gaps are often the most valuable output of a scoping review because they directly inform future research agendas.
Visual evidence maps (tables, charts, bubble plots, and geographic maps) are increasingly used to present scoping review results. A well-designed evidence map allows readers to grasp the distribution of evidence at a glance. For example, a matrix with Population subgroups on one axis and Concept dimensions on the other, with cell values indicating the number of studies, immediately reveals where evidence is concentrated and where it is sparse.
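A matrix-style evidence map of this kind can be sketched in a few lines. The population subgroups and concept dimensions below are illustrative placeholders; zero-valued cells are the evidence gaps the map is designed to surface.

```python
# Sketch: an evidence-gap matrix counting studies per Population subgroup
# (rows) by Concept dimension (columns). Labels are illustrative.
from collections import Counter

# (population subgroup, concept dimension) pairs recorded during charting
charted = [
    ("65-74", "measurement"), ("65-74", "measurement"),
    ("65-74", "interventions"), ("75+", "measurement"),
]

rows = ["65-74", "75+"]
cols = ["measurement", "interventions"]
cell_counts = Counter(charted)  # missing cells default to 0

# Print the matrix; zero cells flag evidence gaps at a glance.
print("Population".ljust(12) + "".join(c.ljust(15) for c in cols))
for r in rows:
    print(r.ljust(12) + "".join(str(cell_counts[(r, c)]).ljust(15) for c in cols))
```

Here the zero in the 75+/interventions cell would signal an underrepresented population, exactly the kind of gap Stage 5 should report.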
Stage 6: Consultation with Stakeholders
The sixth stage, stakeholder consultation, was included in Arksey and O'Malley's original framework as an optional step, but subsequent methodological guidance has elevated its importance. Levac et al. (2010) argued that consultation should be a required component, and the JBI Manual strongly recommends it.
Stakeholder consultation involves engaging with individuals who have knowledge or experience relevant to the review topic (clinicians, patients, policymakers, educators, or community members) to validate and contextualize your findings. Stakeholders may identify sources of evidence that your search strategy missed, offer interpretive insights that your charting did not capture, and suggest practical implications that emerge from lived experience rather than published research.
The consultation can take many forms: interviews, focus groups, surveys, advisory panel meetings, or informal discussions. The format should match your review's purpose and resources. Document the consultation process, including who was consulted, how they were recruited, what questions were asked, and how their input influenced the review findings.
In our scoping review work, the most common mistake is treating the consultation stage as an afterthought, or skipping it entirely. When conducted thoughtfully, stakeholder consultation transforms a scoping review from a purely academic exercise into a document with practical relevance and grounded interpretation. It is the stage that connects published evidence to real-world context.