Understanding how to write a scoping review is essential for researchers who need to map the breadth and depth of available evidence on a topic before deciding whether a full systematic review is warranted. A scoping review identifies key concepts, clarifies definitions, examines the types and sources of evidence, and highlights gaps in the existing literature. It is not designed to answer a narrow clinical question or to pool effect sizes; it is designed to chart a research landscape.
The scoping review methodology has matured significantly since its formalization. What began as an informal approach to literature mapping now has a structured framework, dedicated reporting guidelines, and institutional endorsement from the Joanna Briggs Institute. Researchers in health sciences, education, social work, and environmental policy increasingly choose scoping reviews when their objective is exploration rather than intervention effectiveness. This scoping review guide walks through every stage of the process, from question formulation to final reporting.
What Is a Scoping Review?
A scoping review is a type of evidence synthesis that systematically maps the available literature on a broad topic or research area. Its purpose is to identify the volume and nature of research, clarify key concepts and definitions, examine how research is conducted in a particular field, and identify knowledge gaps that may warrant further investigation.
The term was popularized by Arksey and O'Malley in their seminal 2005 paper, which distinguished scoping reviews from systematic reviews by their purpose and scope. Where a systematic review asks a focused question and appraises the quality of each included study, a scoping review asks a broader question and does not formally assess methodological rigor. This distinction is fundamental; it shapes every decision from question formulation to data presentation.
Scoping reviews are particularly valuable in four scenarios. First, when a research area is emerging and the evidence base is heterogeneous in terms of study designs, populations, and outcomes. Second, when the goal is to identify the types of available evidence before committing to a full systematic review. Third, when the aim is to clarify working definitions or conceptual boundaries. Fourth, when a research team wants to map the key factors related to a concept and identify how these factors are studied across different contexts.
The JBI Manual (Peters et al., 2020) provides the most authoritative contemporary guidance on scoping review conduct. It builds on the original Arksey and O'Malley framework and the enhancements proposed by Levac et al. (2010), creating a comprehensive methodology that aligns with current evidence synthesis standards. Researchers conducting scoping reviews in health and social sciences should treat the JBI Manual as their primary methodological reference.
It is worth noting what scoping reviews do not do. They do not assess risk of bias. They do not grade the certainty of evidence. They do not calculate pooled effect estimates. They do not produce clinical recommendations. If your research question requires any of these outputs, a systematic review, not a scoping review, is the appropriate method. For a detailed comparison, see our guide on scoping vs systematic review.
How to Write a Scoping Review: The 6-Stage Framework
The foundational methodology for conducting a scoping review comes from the Arksey and O'Malley framework, published in 2005. Their original paper proposed five mandatory stages and one optional stage. Levac et al. (2010) refined each stage with additional methodological recommendations, and the JBI Manual (Peters et al., 2020) formalized these into institutional guidance. Together, these three sources form the methodological backbone of modern scoping review practice.
The six scoping review steps are: (1) identifying the research question, (2) identifying relevant studies, (3) study selection, (4) charting the data, (5) collating, summarizing, and reporting the results, and (6) consultation with stakeholders. The first five stages are considered mandatory. The sixth, stakeholder consultation, was originally described as optional by Arksey and O'Malley but is now strongly recommended by Levac et al. and the JBI Manual.
Stage 1: Identifying the Research Question
The research question in a scoping review is deliberately broader than the focused question used in a systematic review. Where a systematic review might ask "Does intervention X improve outcome Y in population Z?", a scoping review asks "What is known about topic A in context B?" The breadth is intentional; it allows the review to capture a wide range of evidence types, study designs, and perspectives.
Formulating the question requires the PCC framework: Population, Concept, and Context. This replaces the PICO framework (Population, Intervention, Comparison, Outcome) used in systematic reviews. The PCC framework is broader by design: it does not require specifying an intervention, comparator, or measured outcome, because scoping reviews are not evaluating effectiveness.
A well-constructed PCC question defines three elements clearly. The Population identifies who is being studied; this could be patients, healthcare providers, students, policymakers, or any defined group. The Concept identifies the core phenomenon, intervention, or topic area under investigation. The Context identifies the setting, geographic location, cultural factors, or disciplinary boundaries that frame the review.
For example, a scoping review question using PCC might read: "What is the nature and extent of research on digital health literacy (Concept) among older adults aged 65 and above (Population) in primary care settings (Context)?" This question is broad enough to capture qualitative, quantitative, and mixed-methods studies across different countries and time periods, while remaining focused enough to produce a coherent synthesis.
Levac et al. (2010) recommended that the research question be developed iteratively. You may need to refine it after conducting a preliminary search. If the initial question retrieves an unmanageable volume of results, you can narrow the Concept or Context. If it retrieves too few, you can broaden them. This iterative approach distinguishes scoping review question development from the more rigid approach typical of systematic reviews. To structure your question with PCC/PICO, you can use our free framework generator tool.
Stage 2: Identifying Relevant Studies
The search strategy in a scoping review must be comprehensive, reproducible, and transparent, meeting the same standards applied to systematic reviews. The PCC elements structure the search: a Boolean strategy combines Population terms, Concept terms, and Context terms with AND/OR operators to capture all potentially relevant literature.
You should search a minimum of two electronic databases, though three to five is considered best practice. Common choices include PubMed/MEDLINE, CINAHL, PsycINFO, Scopus, Web of Science, and Embase. The specific databases depend on your discipline: education reviews might use ERIC, social work reviews might use Social Services Abstracts, and environmental reviews might use GreenFILE.
Grey literature is a critical component of scoping review searches. Because scoping reviews aim to map the breadth of evidence, they should include dissertations, conference proceedings, government reports, organizational white papers, and preprints. Sources like ProQuest Dissertations and Theses, OpenGrey, and relevant organizational websites should be searched systematically. Grey literature reduces publication bias and captures evidence that may not appear in indexed databases.
The JBI Manual recommends a three-step search strategy. First, conduct an initial limited search of at least two relevant databases to identify keywords contained in the titles and abstracts of relevant articles. Second, conduct a comprehensive search across all selected databases using all identified keywords and index terms. Third, search the reference lists of all included sources for additional relevant studies.
Document every aspect of your search: databases searched, date of search, full search strings, number of results per database, and any filters applied. This documentation is required for PRISMA-ScR reporting and ensures your search can be reproduced by other researchers. Working with a research librarian or information specialist to develop and validate your search strategy is strongly recommended.
Stage 3: Study Selection
Study selection in a scoping review follows the same two-stage screening process used in systematic reviews: title-and-abstract screening followed by full-text screening. Both stages apply predetermined eligibility criteria that flow directly from your PCC question.
Eligibility criteria should specify inclusion and exclusion parameters for each PCC element. For Population, define who is included (e.g., adults aged 65+) and who is excluded (e.g., studies exclusively about children). For Concept, define what counts as relevant (e.g., studies examining digital health literacy) and what does not (e.g., studies about general computer literacy without a health component). For Context, define the setting boundaries (e.g., primary care settings) and exclusions (e.g., hospital inpatient settings). You can use our eligibility criteria tool to structure these parameters systematically.
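To illustrate how PCC eligibility criteria translate into consistent screening decisions, the following Python sketch encodes them as a function that returns a decision and a documented reason. The field names, thresholds, and records are hypothetical examples invented for this illustration, not a fixed schema.

```python
# Sketch: encoding PCC eligibility criteria as a screening function.
# Field names and criteria are hypothetical examples, not a fixed schema.

def screen_record(record):
    """Return (decision, reason) for one record against PCC criteria."""
    if record["min_age"] < 65:  # Population: adults aged 65+
        return ("exclude", "population outside 65+ range")
    if "digital health literacy" not in record["topic"]:  # Concept
        return ("exclude", "concept not digital health literacy")
    if record["setting"] != "primary care":  # Context
        return ("exclude", "context not primary care")
    return ("include", "meets all PCC criteria")

records = [
    {"min_age": 67, "topic": "digital health literacy survey", "setting": "primary care"},
    {"min_age": 40, "topic": "digital health literacy survey", "setting": "primary care"},
    {"min_age": 70, "topic": "digital health literacy", "setting": "inpatient"},
]
decisions = [screen_record(r) for r in records]
```

Recording the reason alongside the decision gives you the exclusion counts the PRISMA-ScR flow diagram asks for later.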
Levac et al. (2010) emphasized that at least two reviewers should independently screen titles and abstracts, with conflicts resolved through discussion or a third reviewer. This recommendation has been adopted by the JBI Manual and is now considered standard practice. Using screening software such as Covidence or Rayyan, or reference management software such as EndNote, streamlines the screening process and maintains an audit trail.
After title-and-abstract screening, retrieve the full texts of all potentially relevant studies. Apply the same eligibility criteria at full-text level, documenting reasons for exclusion at this stage. The reasons for exclusion are reported in the PRISMA-ScR flow diagram, which provides a transparent visual record of how studies moved through each screening phase.
One important distinction: scoping reviews typically cast a wider net than systematic reviews during screening. Because the goal is breadth rather than precision, borderline studies are more likely to be included than excluded. When in doubt, include the study and let the charting stage reveal whether it contributes meaningfully to the evidence map.
Stage 4: Charting the Data
Data charting is the scoping review equivalent of data extraction in a systematic review, though the approach differs in important ways. Data charting in scoping reviews uses a structured form aligned with PCC elements to systematically record information from each included source. The charting form captures descriptive information (who, what, where, when, and how) rather than outcome data or effect sizes.
A standard data charting form includes fields for: author(s), year of publication, country of origin, study design, population characteristics, concept definition used, context details, key findings relevant to the review question, and any additional variables specific to the review topic. The JBI Manual (Peters et al., 2020) provides a template that researchers can adapt to their specific review.
| Charting Field | Description | Example |
|---|---|---|
| Author, Year | Bibliographic details | Smith et al., 2023 |
| Country | Where the study was conducted | United Kingdom |
| Study Design | Type of study | Cross-sectional survey |
| Population | PCC Population: Who was studied | Adults aged 65-80 (n=450) |
| Concept | PCC Concept: What was examined | Digital health literacy assessment |
| Context | PCC Context: Setting/environment | Urban primary care clinics |
| Key Findings | Relevant results | 62% had low digital health literacy scores |
| Implications Noted | Author-stated implications | Training programs needed in primary care |
Unlike systematic review data extraction, where the form is typically finalized before extraction begins, scoping review charting forms should evolve iteratively. Levac et al. (2010) recommended piloting the charting form on 3-5 studies, then refining categories based on what the data reveals. New variables may emerge as charting progresses; for example, you may discover that funding source is an important contextual variable you did not anticipate. The iterative nature of data charting is a defining feature of scoping review methodology.
Two reviewers should independently chart data from each included study. This reduces errors and ensures consistency. Any discrepancies should be resolved through discussion. For large scoping reviews with dozens or hundreds of included sources, dividing the charting workload between reviewers, with a subset charted by both for inter-rater reliability, is a practical compromise endorsed by the JBI Manual.
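When a subset is charted by both reviewers, inter-rater agreement is commonly summarized with Cohen's kappa. The sketch below computes it from two reviewers' decisions; the decision lists are illustrative, and in practice you would feed in the real dual-screened subset.

```python
# Sketch: Cohen's kappa for a dually screened subset.
# The include/exclude decision lists are illustrative.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement beyond chance between two raters on the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
kappa = cohens_kappa(a, b)
```

Values above roughly 0.6 are conventionally read as substantial agreement; lower values suggest the charting categories need clarification before full charting proceeds.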
Stage 5: Collating, Summarizing, and Reporting Results
Stage 5 is where your charted data becomes a coherent evidence map. This stage involves three activities: collating the results into a structured format, summarizing the findings in relation to your research question, and reporting the outcomes using descriptive and visual methods.
Begin by producing a descriptive numerical summary of the included studies. Report the total number of studies included, a breakdown by year of publication (to show trends over time), a breakdown by country or region, a breakdown by study design, and distributions across your PCC elements. Tables and charts are effective for presenting these distributions; a stacked bar chart showing study designs by decade, for example, immediately communicates how research methods have evolved.
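These counts can be produced directly from the charted records. A minimal Python sketch, using invented study records with illustrative field names:

```python
# Sketch: a descriptive numerical summary from charted study records.
# The records and field names (year, country, design) are illustrative.
from collections import Counter

charted = [
    {"year": 2019, "country": "UK", "design": "cross-sectional"},
    {"year": 2021, "country": "Canada", "design": "qualitative"},
    {"year": 2021, "country": "UK", "design": "cross-sectional"},
    {"year": 2023, "country": "Australia", "design": "mixed methods"},
]

by_year = Counter(s["year"] for s in charted)      # trend over time
by_design = Counter(s["design"] for s in charted)  # methods distribution

print(f"Included studies: {len(charted)}")
for year in sorted(by_year):
    print(f"  {year}: {by_year[year]} studies")
```

The same pattern extends to any charting variable, which is why a consistent row-per-study structure pays off at this stage.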
Next, provide a narrative summary organized thematically around your PCC elements or around themes that emerged during data charting. Unlike a systematic review, which synthesizes findings to answer a specific question, a scoping review presents findings descriptively. You are mapping the terrain, not drawing conclusions about intervention effectiveness. Describe what the evidence shows, where it clusters, and where gaps exist.
Evidence gaps should be explicitly identified and discussed. Where are the geographic gaps? Is all the evidence from high-income countries? What populations are underrepresented? Which study designs dominate, and what alternative designs might strengthen the evidence base? Are there conceptual definitions that vary widely across studies? These gaps are often the most valuable output of a scoping review because they directly inform future research agendas.
Visual evidence maps (tables, charts, bubble plots, and geographic maps) are increasingly used to present scoping review results. A well-designed evidence map allows readers to grasp the distribution of evidence at a glance. For example, a matrix with Population subgroups on one axis and Concept dimensions on the other, with cell values indicating the number of studies, immediately reveals where evidence is concentrated and where it is sparse.
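Such a matrix can be tallied directly from charted (Population subgroup, Concept dimension) pairs. A small Python sketch with illustrative labels:

```python
# Sketch: a Population-by-Concept evidence map as a count matrix.
# Subgroup and dimension labels are illustrative.
from collections import Counter

studies = [
    ("65-74", "access"), ("65-74", "skills"), ("65-74", "access"),
    ("75+", "skills"), ("75+", "access"),
]
matrix = Counter(studies)  # (population subgroup, concept dimension) -> count

populations = sorted({p for p, _ in studies})
concepts = sorted({c for _, c in studies})
for p in populations:
    row = "  ".join(f"{c}={matrix[(p, c)]}" for c in concepts)
    print(f"{p}: {row}")
```

Cells with zero or near-zero counts are your evidence gaps, which feeds straight into the gap discussion above.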
Stage 6: Consultation with Stakeholders
The sixth stage, stakeholder consultation, was included in Arksey and O'Malley's original framework as an optional step, but subsequent methodological guidance has elevated its importance. Levac et al. (2010) argued that consultation should be a required component, and the JBI Manual strongly recommends it.
Stakeholder consultation involves engaging with individuals who have knowledge or experience relevant to the review topic (clinicians, patients, policymakers, educators, or community members) to validate and contextualize your findings. Stakeholders may identify sources of evidence that your search strategy missed, offer interpretive insights that your charting did not capture, and suggest practical implications that emerge from lived experience rather than published research.
The consultation can take many forms: interviews, focus groups, surveys, advisory panel meetings, or informal discussions. The format should match your review's purpose and resources. Document the consultation process, including who was consulted, how they were recruited, what questions were asked, and how their input influenced the review findings.
In our scoping review work, a frequent mistake is researchers treating the consultation stage as an afterthought, or skipping it entirely. When conducted thoughtfully, stakeholder consultation transforms a scoping review from a purely academic exercise into a document with practical relevance and grounded interpretation. It is the stage that connects published evidence to real-world context.
PCC vs PICO: Choosing the Right Framework
One of the most critical decisions when planning a scoping review is selecting the correct question formulation framework. The PCC framework (Population, Concept, Context) and the PICO framework (Population, Intervention, Comparison, Outcome) serve different purposes, and using the wrong one undermines the methodological integrity of your review.
| Element | PCC (Scoping Reviews) | PICO (Systematic Reviews) |
|---|---|---|
| P | Population | Population / Problem |
| Second element | Concept (broad topic area) | Intervention (specific treatment/exposure) |
| Third element | Context (setting, culture, geography) | Comparison (alternative or control) |
| Fourth element | Not applicable | Outcome (measurable result) |
| Scope | Broad, exploratory | Narrow, focused |
| Study designs included | All types | Usually RCTs, cohort, case-control |
| Purpose | Map evidence, identify gaps | Answer effectiveness question |
The PICO framework structures research questions around a specific intervention and its measurable outcomes. It presupposes that you know what intervention you are evaluating, what comparison condition is relevant, and what outcome you are measuring. This level of specificity is appropriate for systematic reviews that aim to pool effect sizes and produce clinical recommendations.
The PCC framework, by contrast, does not assume you know the intervention landscape. The "Concept" element is deliberately broad: it can encompass a phenomenon, a policy area, a clinical practice, or a theoretical construct. The "Context" element replaces both Comparison and Outcome with a spatial, cultural, or disciplinary boundary. This breadth is what makes PCC appropriate for scoping reviews, where the goal is exploration rather than evaluation.
In our scoping review work, the most common mistake is researchers using PICO instead of PCC. When a researcher formulates a PICO question for a scoping review, the review becomes too narrow: it excludes qualitative studies, excludes studies that do not measure a specific outcome, and ultimately fails to achieve the breadth that justifies a scoping review in the first place. If your question can be fully answered with PICO, you likely need a systematic review, not a scoping review.
The choice between PCC and PICO also determines your eligibility criteria, your search strategy breadth, and your data charting variables. A PCC-driven eligibility criterion includes any study design that addresses the Concept within the defined Context. A PICO-driven criterion restricts inclusion to studies that measure specific Outcomes of a specific Intervention. These downstream consequences make the framework choice one of the earliest and most consequential decisions in any evidence synthesis project. For a detailed comparison, see our guide on scoping vs systematic review.
Building a Scoping Review Search Strategy
A rigorous search strategy is the backbone of any scoping review. The search must be comprehensive enough to capture the breadth of evidence your PCC question demands, while remaining reproducible and transparent. A poorly constructed search either misses relevant studies (compromising completeness) or retrieves an unmanageable volume of irrelevant results (compromising feasibility).
Begin by breaking your PCC question into its component concepts. Each concept becomes a search block. Within each block, list synonyms, related terms, abbreviations, and spelling variations. For the Population block in a scoping review about digital health literacy among older adults, your terms might include: "older adults" OR "elderly" OR "aged" OR "seniors" OR "geriatric" OR "older people" OR "aged 65 and over." For the Concept block: "digital health literacy" OR "eHealth literacy" OR "electronic health literacy" OR "health information technology literacy." Combine blocks with AND to create the final search string.
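To make the block-and-combine logic concrete, here is a minimal Python sketch that assembles a Boolean query from PCC term blocks. The term lists are illustrative, not a validated strategy, and a real database search would add field tags and controlled vocabulary on top of this.

```python
# Sketch: assembling a Boolean search string from PCC term blocks.
# The term lists are illustrative, not a validated search strategy.

population_terms = ["older adults", "elderly", "aged", "seniors", "geriatric"]
concept_terms = ["digital health literacy", "eHealth literacy",
                 "electronic health literacy"]
context_terms = ["primary care", "general practice", "family medicine"]

def or_block(terms):
    """Join synonyms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_search_string(*blocks):
    """Combine the per-element blocks with AND to form the final query."""
    return " AND ".join(or_block(b) for b in blocks)

query = build_search_string(population_terms, concept_terms, context_terms)
print(query)
```

Keeping the blocks as data rather than a hand-typed string makes it easy to regenerate and document database-specific variants of the same strategy.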
Use both free-text terms (searched in title and abstract fields) and controlled vocabulary terms (MeSH headings in PubMed, CINAHL headings in CINAHL, Emtree terms in Embase). Controlled vocabulary ensures you capture studies that use different terminology for the same concept. A study about "digital health competency" might not appear in a free-text search for "digital health literacy" but would be indexed under the same MeSH heading.
The JBI Manual recommends the three-step search process described in Stage 2 above. This iterative approach ensures your search evolves based on what you find. After the initial limited search, review the titles, abstracts, and index terms of relevant articles to identify additional keywords you may have missed. Incorporate these into your comprehensive search.
Apply search limits judiciously. Date limits may be appropriate if your review focuses on a specific time period; for example, studies published after the introduction of a particular technology or policy. Language limits should be avoided if possible, as they introduce bias; if you must limit to English-language publications, acknowledge this as a limitation. Do not apply study design filters in scoping review searches; scoping reviews include all study designs by definition.
Document your search strategy in sufficient detail for replication. Record the full search string for each database, including field tags, Boolean operators, and any limits. Record the date of each search. Record the number of results retrieved per database. This documentation becomes part of your PRISMA-ScR supplementary material and demonstrates the rigor of your approach. You can use PRISMA for scoping reviews to track your reporting requirements throughout the process.
Data Charting in Scoping Reviews
Data charting is the term used in scoping review methodology for what systematic reviewers call data extraction. The terminology difference is deliberate; it signals a different philosophical approach. Where systematic review extraction pulls specific data points (sample sizes, effect sizes, confidence intervals) to answer a focused question, scoping review charting captures descriptive information to map the evidence landscape.
The charting form is the instrument you use to record standardized information from each included study. Design it around your PCC elements, adding variables that are relevant to your research question. The JBI Manual (Peters et al., 2020) recommends including, at minimum: author(s), year of publication, country, aims/purpose, study population and sample size, methodology/study design, concept as defined or operationalized by the study, context, key findings relevant to your review question, and any additional domain-specific variables.
The iterative nature of charting distinguishes scoping reviews from systematic reviews. In a systematic review, the data extraction form is typically piloted and finalized before extraction begins; you know exactly what data points you need because your PICO question specifies them. In a scoping review, the charting form evolves as you encounter the data. You might begin charting and discover that studies operationalize your Concept in three distinct ways you did not anticipate. Add a variable to capture this. You might find that the Context element subdivides naturally into sub-contexts (urban vs. rural, high-income vs. low-income countries) that merit separate tracking.
Pilot your charting form on 3-5 studies before beginning full charting. Have two reviewers independently chart the same pilot studies and compare results. Discrepancies reveal ambiguities in your charting categories that need clarification. After piloting, revise the form and proceed with full charting.
For large scoping reviews with many included studies, charting can be time-consuming. Using a structured spreadsheet (Excel or Google Sheets) or a dedicated data management tool (such as the JBI System for the Unified Management, Assessment and Review of Information, known as JBI SUMARI) helps maintain consistency and facilitates later analysis. Each row represents one study, and each column represents one charting variable. This structure enables easy sorting, filtering, and tabulation during Stage 5.
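As a sketch of that row-per-study structure, the snippet below loads a small charting sheet from CSV text and filters and sorts it the way Stage 5 tabulation requires. The records and column names are invented for this example; a real sheet exported from Excel or Google Sheets would be read the same way from a file.

```python
# Sketch: a charting sheet as rows of dicts, filtered and sorted.
# The records and column names are illustrative.
import csv
import io

csv_text = """author,year,country,design
Smith,2023,UK,cross-sectional
Lee,2021,Canada,qualitative
Garcia,2022,Spain,cross-sectional
"""
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Filter to one design, then sort newest first, ready for tabulation.
cross_sectional = sorted(
    (r for r in rows if r["design"] == "cross-sectional"),
    key=lambda r: r["year"],
    reverse=True,
)
```

Because every row has the same columns, any charting variable can be sorted, filtered, or counted this way without restructuring the sheet.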
One common pitfall is attempting to extract outcome data or effect sizes during charting. Scoping reviews do not synthesize outcomes; they describe what was studied, how it was studied, and what was found in descriptive terms. Your charting should capture what the study reported as its main findings, not recalculate or reinterpret its statistical results. This restraint is what keeps a scoping review within its methodological boundaries.
Reporting with PRISMA-ScR
PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) is the reporting guideline specifically developed for scoping reviews. Published by Tricco et al. in 2018 and endorsed by the EQUATOR Network, PRISMA-ScR provides a 22-item checklist (20 essential items plus 2 optional items) that ensures transparent, complete, and reproducible reporting of scoping review findings. Reporting your scoping review against PRISMA-ScR meets current publication standards.
The checklist covers every section of a scoping review manuscript. Title items require that the report be identified as a scoping review. Abstract items specify what should be reported in structured abstracts. Introduction items address the rationale and objectives. Methods items cover the protocol, eligibility criteria, information sources, search strategy, selection of sources, data charting, and synthesis methods. Results items address study selection (including the flow diagram), characteristics of included sources, and results of individual sources and syntheses. Discussion items cover summary of evidence, limitations, and conclusions. Funding items ensure transparency about financial support.
| PRISMA-ScR Section | Items | Key Requirements |
|---|---|---|
| Title | 1 | Identify as scoping review |
| Abstract | 2 | Structured abstract with objectives, methods, results |
| Introduction | 3-4 | Rationale and objectives |
| Methods | 5-12 | Protocol, eligibility, sources, search, selection, charting, synthesis |
| Results | 13-17 | Selection flow, study characteristics, individual results, synthesis |
| Discussion | 18-20 | Summary, limitations, conclusions |
| Funding | 21-22 | Funding source and role of funder |
The PRISMA-ScR flow diagram is a visual representation of the study selection process. It shows the number of records identified from each source, the number after duplicate removal, the number screened at title-and-abstract level, the number assessed at full-text level, the number excluded at full-text level (with reasons), and the final number of included sources. This flow diagram is not optional; it is a core reporting requirement. For guidance on creating one, see our scoping review checklist guide.
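Because the flow diagram numbers must reconcile arithmetically, it is safer to derive them from your screening tallies than to type them by hand. A Python sketch with placeholder counts (all figures here are illustrative):

```python
# Sketch: deriving PRISMA-ScR flow diagram counts from screening tallies.
# All numbers are illustrative placeholders.

identified = {"PubMed": 412, "CINAHL": 198, "Scopus": 355}
duplicates_removed = 240
excluded_title_abstract = 580
fulltext_exclusions = {"wrong population": 22,
                       "wrong concept": 31,
                       "wrong context": 14}

total_identified = sum(identified.values())
after_dedup = total_identified - duplicates_removed
fulltext_assessed = after_dedup - excluded_title_abstract
included = fulltext_assessed - sum(fulltext_exclusions.values())

print(f"Identified: {total_identified}, screened: {after_dedup}, "
      f"full text: {fulltext_assessed}, included: {included}")
```

If any derived value goes negative or fails to match your screening log, the discrepancy surfaces before the diagram reaches peer review.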
Registration of your scoping review protocol adds credibility and prevents duplication. PROSPERO (the International Prospective Register of Systematic Reviews) has traditionally restricted registration to systematic reviews with health-related outcomes, so check its current eligibility criteria before submitting; the Open Science Framework (OSF) accepts scoping review registrations in any domain. Registration should occur before screening begins and should include your research question, eligibility criteria, search strategy, and planned approach to data charting and synthesis.
Many journals now require PRISMA-ScR compliance as a condition of manuscript submission. Even when not explicitly required, using PRISMA-ScR strengthens your manuscript by demonstrating methodological rigor. Reviewers and editors look for PRISMA-ScR adherence as a signal of quality. Our PRISMA for scoping reviews tool lets you track each checklist item as you write your manuscript.
Common Scoping Review Mistakes
After supporting hundreds of evidence synthesis projects, including many scoping reviews through our scoping review service, we have identified patterns of mistakes that recur across disciplines, institutions, and career stages. Avoiding these mistakes will save months of rework and improve the quality of your final publication.
Mistake 1: Using PICO instead of PCC. This is the single most common methodological error. Researchers trained in systematic review methods default to PICO, which narrows the review to specific interventions and outcomes. A scoping review using PICO will miss qualitative studies, theoretical papers, policy analyses, and other evidence types that are essential to mapping the full evidence landscape. Always use PCC for scoping review question formulation.
Mistake 2: Assessing quality of included studies. Scoping reviews do not require formal risk of bias or quality assessment. The JBI Manual (Peters et al., 2020) is explicit on this point. Researchers sometimes add quality assessment because they assume all evidence synthesis must include it, but doing so conflates scoping review methodology with systematic review methodology. If you find yourself wanting to assess quality, ask whether your question actually requires a systematic review.
Mistake 3: Attempting to synthesize outcomes. Scoping reviews describe and map evidence; they do not pool data, calculate effect sizes, or draw conclusions about intervention effectiveness. If your results section reads like a systematic review (e.g., "The pooled odds ratio was 1.45, 95% CI 1.12-1.88"), you have crossed a methodological boundary. Present findings descriptively: how many studies addressed each concept, what populations were studied, what contexts were represented, and what the studies reported.
Mistake 4: Inadequate search strategy. A scoping review demands the same search rigor as a systematic review. Searching only one database, omitting grey literature, or failing to use controlled vocabulary terms compromises the review's comprehensiveness. The JBI Manual recommends at least two databases plus grey literature sources, and the three-step search strategy ensures you capture relevant terms you may not have anticipated.
Mistake 5: Treating data charting as a one-time event. Data charting in scoping reviews is iterative. If you finalize your charting form before seeing the data and refuse to adapt it, you will miss important variables that emerge from the literature. Pilot your form, revise it, and continue refining as needed. This iterative approach is endorsed by Levac et al. (2010) and is a methodological feature, not a flaw.
Mistake 6: Skipping the stakeholder consultation stage. While Arksey and O'Malley described this as optional, contemporary guidance treats it as a critical component. Stakeholder consultation grounds your findings in real-world context, identifies evidence your search may have missed, and ensures your review has practical relevance beyond the academic literature. Even a brief consultation with key informants strengthens the review substantially.
Mistake 7: Not registering the protocol. Failing to register your scoping review protocol (for example, on the Open Science Framework) before beginning screening is a missed opportunity. Registration demonstrates methodological planning, prevents unnecessary duplication, and is increasingly expected by journals and peer reviewers. The registration process also forces you to formalize your methods, which improves the quality of your protocol.
Mistake 8: Poor reporting. Using PRISMA 2020 (designed for systematic reviews) instead of PRISMA-ScR (designed for scoping reviews) is a common reporting error. Each reporting guideline is tailored to its review type. PRISMA-ScR includes items specific to scoping reviews, such as data charting, and omits items irrelevant to scoping reviews, such as risk of bias assessment. Use the correct guideline for your review type.
By understanding these common pitfalls and the six-stage framework described in this guide, you are well positioned to conduct a methodologically rigorous scoping review that contributes meaningfully to your field. Whether you are mapping the evidence on a new health intervention, exploring the scope of educational research on a particular pedagogy, or charting the policy landscape around an emerging issue, the Arksey and O'Malley framework, refined by Levac et al. and codified by the JBI Manual, provides the methodological foundation you need.
For researchers who want professional support throughout the scoping review process, Research Gold offers end-to-end scoping review services, from protocol development and registration through search strategy design, screening, data charting, and manuscript preparation. Every deliverable is PRISMA-ScR compliant and aligned with JBI methodology. Learn about our scoping review service or get a quote to discuss your project.