Research Gold
© 2026 Research Gold. All rights reserved.


JBI Critical Appraisal Tool

Free

Assess methodological quality using the Joanna Briggs Institute critical appraisal checklists. Supports RCTs, cohort, cross-sectional, and qualitative study designs with scoring summaries and CSV export.

How to Use

Select a study design tab, add your studies, and answer each checklist item using the radio buttons. Each item can be rated as Yes, No, Unclear, or N/A. The summary shows counts and an overall recommendation based on the proportion of "Yes" responses.
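The summary logic described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the tool's actual code: the rating labels and the choice to exclude N/A items from the denominator follow the scoring convention described in the FAQ below.

```python
from collections import Counter

# Hypothetical ratings for one 13-item RCT appraisal (illustrative data).
responses = ["yes", "yes", "no", "unclear", "yes", "na",
             "yes", "yes", "yes", "no", "yes", "yes", "yes"]

counts = Counter(responses)                 # per-rating counts for the summary
applicable = len(responses) - counts["na"]  # N/A items excluded from the denominator
pct_yes = 100 * counts["yes"] / applicable  # quality score as % of applicable items
```

With this example set (9 Yes, 2 No, 1 Unclear, 1 N/A), the score is computed over the 12 applicable items rather than all 13.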

Study 1

Q1. Was true randomization used for assignment of participants to treatment groups?

Q2. Was allocation to treatment groups concealed?

Q3. Were treatment groups similar at the baseline?

Q4. Were participants blind to treatment assignment?

Q5. Were those delivering treatment blind to treatment assignment?

Q6. Were outcomes assessors blind to treatment assignment?

Q7. Were treatment groups treated identically other than the intervention of interest?

Q8. Was follow-up complete, and if not, were differences between groups in terms of their follow-up adequately described and analyzed?

Q9. Were participants analyzed in the groups to which they were randomized?

Q10. Were outcomes measured in the same way for treatment groups?

Q11. Were outcomes measured in a reliable way?

Q12. Was appropriate statistical analysis used?

Q13. Was the trial design appropriate, and any deviations from the standard RCT design accounted for in the conduct and analysis?


How to Use This Tool

1. Select Study Design — Choose the checklist that matches your study type: RCTs, cohort, cross-sectional, or qualitative studies.

2. Add Studies — Enter each study name and respond to every checklist item with Yes, No, Unclear, or Not Applicable.

3. Review Summary — Check the appraisal summary table showing counts, quality scores, and include/exclude recommendations per study.

4. Export Results — Copy results to clipboard or export as a CSV file for inclusion in your systematic review manuscript.
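The CSV export step could look like the following sketch using Python's standard csv module. The column names and row values here are assumptions for illustration, not the tool's exact export schema.

```python
import csv
import io

# Hypothetical per-study appraisal summary rows (illustrative data).
rows = [
    {"study": "Study 1", "yes": 9, "no": 2, "unclear": 1, "na": 1, "score_pct": 75.0},
    {"study": "Study 2", "yes": 6, "no": 4, "unclear": 3, "na": 0, "score_pct": 46.2},
]

# Write the summary table to an in-memory CSV buffer.
buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["study", "yes", "no", "unclear", "na", "score_pct"]
)
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()  # ready to save or paste into a manuscript appendix
```

Writing to an in-memory buffer keeps the sketch self-contained; in practice the same code would write to a file opened with `newline=""`.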

Key Takeaways for Critical Appraisal

Match the checklist to the study design

JBI provides separate checklists for different methodologies because quality criteria vary between study types. Using a cohort checklist for an RCT, or vice versa, would miss design-specific quality indicators and lead to inaccurate appraisals.

Multiple study designs in one review

Mixed-methods systematic reviews often include RCTs, cohort studies, and qualitative research. Appraise each study with its appropriate JBI checklist, then synthesize findings while being transparent about quality differences across designs.

Use scoring as a guide, not a rule

While percentage-based scoring provides a quick quality summary, consider which specific items received 'No' or 'Unclear' ratings. A study scoring 80% but failing on randomization may be more problematic than one scoring 70% with minor reporting gaps.

Report appraisal transparently

Include a complete critical appraisal table in your review showing item-level responses for each study. This allows readers to assess whether your quality judgments are appropriate and facilitates replication of your review methodology.

Critical Appraisal Across Study Designs: The JBI Approach

Critical appraisal is the systematic evaluation of research evidence to judge its trustworthiness, relevance, and applicability — and no single instrument can adequately assess every study design. The JBI critical appraisal tool, developed and maintained by the Joanna Briggs Institute (Aromataris & Munn, 2020), addresses this challenge by providing a suite of design-specific checklists that share a common scoring philosophy while tailoring their questions to the methodological features that matter most for each study type. A JBI checklist online allows reviewers to evaluate randomized controlled trials, cohort studies, analytical cross-sectional studies, prevalence studies, case reports, case series, and — uniquely among major appraisal frameworks — qualitative research. This breadth makes the JBI system indispensable for mixed-methods and comprehensive systematic reviews that synthesize evidence from heterogeneous study designs. The CASP (Critical Appraisal Skills Programme) checklists serve as a complementary appraisal framework that is particularly popular in the United Kingdom and in health services research, offering streamlined question sets for qualitative studies, cohort studies, and RCTs. For reviews that combine quantitative and qualitative evidence, the Mixed Methods Appraisal Tool (MMAT) developed by Hong et al. (2018) provides a single instrument capable of appraising all empirical study designs within one unified framework.

The qualitative study appraisal tool within the JBI framework deserves particular attention because qualitative evidence plays an increasingly important role in evidence synthesis. The JBI Checklist for Qualitative Research contains 10 items addressing the congruity between the stated philosophical perspective, the methodology, the research question, the data collection methods, the data analysis approach, and the interpretation of results. It also evaluates whether the researcher's cultural and theoretical positioning is clearly stated — a reflexivity assessment that is absent from quantitative appraisal tools and is essential for judging the credibility of interpretive research. When synthesizing qualitative findings across studies, the CERQual (Confidence in Evidence from Reviews of Qualitative Research) approach provides a structured method for rating the certainty of qualitative evidence, analogous to the GRADE framework used for quantitative reviews. For quantitative designs, the JBI RCT checklist assesses randomization, allocation concealment, blinding, and completeness of follow-up, while the cohort checklist evaluates group comparability, exposure measurement, and confounding control. Each item is answered as Yes, No, Unclear, or Not Applicable, and the proportion of Yes responses yields a summary quality score.

The JBI Manual for Evidence Synthesis (Aromataris & Munn, 2020) recommends that two independent reviewers complete the critical appraisal for every included study, with disagreements resolved by discussion or referral to a third reviewer. While no universal cutoff score is mandated, a common convention considers studies with 70% or more Yes responses as high quality, 50-69% as moderate, and below 50% as low quality. However, reviewers should look beyond the aggregate score and consider which specific items were rated No or Unclear — a study achieving 80% overall but failing on randomization concealment may introduce more bias than one scoring 70% with only minor reporting gaps. PRISMA 2020 (Page et al., 2021) requires transparent reporting of all critical appraisal results, and the Cochrane Handbook (Higgins et al., 2023) emphasizes that quality assessment findings should directly inform the interpretation of review results, including decisions about GRADE certainty of evidence ratings. SUMARI (JBI's System for the Unified Management, Assessment and Review of Information) integrates critical appraisal directly into the review management workflow, allowing teams to conduct JBI assessments within the same platform used for screening and extraction.
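The 70% / 50% convention described above can be expressed as a small classification function. This is a hypothetical helper written for illustration; JBI mandates no such cutoffs, so the thresholds here simply encode the common convention, not an official rule.

```python
def interpret_score(yes: int, applicable: int) -> str:
    """Classify a JBI appraisal using the common (non-mandated)
    70% / 50% convention: high, moderate, or low quality."""
    pct = 100 * yes / applicable  # N/A items already excluded from `applicable`
    if pct >= 70:
        return "high"
    if pct >= 50:
        return "moderate"
    return "low"
```

For example, 8 Yes responses out of 11 applicable items is about 72.7%, which this convention would label high quality, although as the text notes, a failed randomization item should still weigh more heavily than the aggregate number suggests.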

Selecting the most appropriate appraisal instrument requires matching the tool to the study design and the level of granularity needed. For randomized trials where domain-level bias judgments are required by journal or funder guidelines, the Cochrane RoB 2 assessment tool provides a five-domain evaluation with traffic light visualization. For non-randomized studies of interventions, the ROBINS-I bias assessment framework offers seven domains with signaling questions tailored to observational comparative designs. When a simpler star-based scoring system is preferred for cohort or case-control studies, the Newcastle-Ottawa Scale calculator provides a widely recognized alternative. The JBI checklists complement all of these tools by covering design categories that the others do not address — qualitative research, prevalence studies, and case series — making them an essential component of any comprehensive systematic review toolkit. Once your appraisal is complete, record all quality scores alongside study characteristics in your data extraction template to keep all evidence in a single, auditable document.

Frequently Asked Questions

What are JBI critical appraisal checklists?

JBI (Joanna Briggs Institute) critical appraisal checklists are standardized tools developed to assess the methodological quality of different study designs. Each checklist contains a series of questions addressing key aspects of study rigor, such as randomization, blinding, confounding, and appropriate statistical analysis. They are widely used in systematic reviews to determine whether a study meets minimum quality thresholds for inclusion.

How do JBI checklists differ from Cochrane RoB 2?

RoB 2 is specifically designed for randomized controlled trials and focuses on risk of bias across five predefined domains. JBI checklists cover a broader range of study designs including qualitative research, cross-sectional studies, cohort studies, prevalence studies, and RCTs. While RoB 2 uses domain-level judgments (low, some concerns, high), JBI checklists use item-level responses (Yes, No, Unclear, N/A) and calculate an overall quality score.

How do I choose the right JBI checklist?

Select the checklist that matches your study design. Use the RCT checklist for randomized trials, the cohort checklist for longitudinal observational studies comparing exposed and unexposed groups, the cross-sectional checklist for prevalence or survey-based studies, and the qualitative checklist for studies using interviews, focus groups, or ethnographic methods. If your review includes multiple study designs, appraise each study with the appropriate checklist.

How should I interpret the JBI scoring?

There is no universal cut-off score mandated by JBI. A common approach is to calculate the percentage of 'Yes' responses out of applicable items (excluding N/A). Studies with 70% or more 'Yes' responses are generally considered high quality, those between 50-69% may warrant further scrutiny, and those below 50% are often considered low quality. However, reviewers should also consider which specific items were rated 'No' or 'Unclear' and their relevance to the review question.

How do I cite JBI checklists in my manuscript?

Cite the JBI critical appraisal tools as follows: 'Methodological quality was assessed using the JBI Critical Appraisal Checklist for [study design] (Joanna Briggs Institute, 2020).' Reference the JBI Manual for Evidence Synthesis, available at https://jbi-global-wiki.refined.site/space/MANUAL. Include the specific checklist version and the number of items used. Report the appraisal results in a summary table showing each study's item-level responses and overall scores.

What is the difference between JBI and Cochrane critical appraisal tools?

JBI provides design-specific checklists for 13+ study types (RCTs, cohort, cross-sectional, qualitative, prevalence, case reports, etc.), while Cochrane’s RoB 2 focuses specifically on randomized trials. JBI checklists use a simpler Yes/No/Unclear/Not Applicable format, making them quicker to complete. JBI tools are particularly useful for mixed-design systematic reviews and scoping reviews.

How do I score a JBI critical appraisal checklist?

Each JBI item is rated Yes, No, Unclear, or Not Applicable. There is no standard numeric score — JBI recommends reporting the number of “Yes” responses relative to the total applicable items (e.g., 7/10). Some review teams set a minimum threshold (e.g., ≥50% Yes) for inclusion, but JBI advises using the appraisal to inform synthesis decisions rather than as a strict inclusion cutoff.

Can I use JBI checklists for scoping reviews?

Yes. Although critical appraisal is optional in scoping reviews (PRISMA-ScR item 12), the JBI Manual for Evidence Synthesis recommends including it when feasible. JBI’s design-specific checklists allow scoping reviews to appraise diverse study types — qualitative, quantitative, and mixed methods — within a single review framework.

Related Research Tools

For randomized controlled trials, complement your JBI appraisal with a domain-level assessment using our Cochrane RoB 2 assessment tool, which generates publication-ready traffic light plots. For observational cohort and case-control studies, score methodological quality with the Newcastle-Ottawa Scale calculator. If you are conducting a scoping review, ensure reporting completeness with our PRISMA-ScR reporting checklist.

Need Expert Quality Assessment?

Our team can conduct thorough critical appraisal with dual independent rating and consensus resolution using JBI, RoB 2, ROBINS-I, or any validated quality assessment tool.

Explore Services View Pricing