Generate a publication-ready methods section for your systematic review or meta-analysis by selecting your methodology across 12 categories.
Fill in the sections below to generate your methods paragraph. The text updates as you make selections.
Choose which of the 12 standardized sections to include in your methods paragraph. Options range from search strategy and screening to statistical analysis and sensitivity analyses, covering the full PRISMA 2020 methods checklist.
Specify your statistical model (fixed-effect or random-effects), effect measure (odds ratio, risk ratio, standardized mean difference), heterogeneity estimator (DerSimonian-Laird, REML, Paule-Mandel), and software used for analysis.
For each selected section, fill in specific parameters such as database names, date ranges, number of reviewers, risk of bias tool, screening software, and publication bias assessment methods. The tool pre-populates common choices.
Review the assembled methods paragraph in real time as you complete each section. The generator combines your selections into grammatically correct, journal-ready prose following PRISMA 2020 reporting standards.
Edit the generated text directly to match your specific journal requirements, add protocol registration numbers, or adjust phrasing for your target audience. The preview updates instantly as you modify parameters.
Copy the final methods paragraph to your clipboard with one click for pasting into your manuscript. The text is formatted for direct inclusion in any systematic review or meta-analysis paper, ready for any final journal-specific adjustments.
Need this done professionally? Get a complete systematic review or meta-analysis handled end-to-end.
Get a Free Quote

The generator covers every component required by PRISMA 2020, from protocol registration and search strategy through data extraction, risk of bias assessment, statistical analysis, and sensitivity analyses. No critical reporting element is overlooked.
All generated text follows the terminology and phrasing recommended by the PRISMA 2020 statement (Page et al., 2021). This ensures your methods section meets journal expectations and passes reviewer scrutiny for reporting completeness.
Once you select your statistical model and heterogeneity estimator, the tool automatically generates the correct technical description including effect measure, pooling approach, between-study variance estimation, and confidence interval construction method.
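The pooling arithmetic behind a random-effects description can be cross-checked directly. As an illustration only (not the tool's internal implementation), here is a minimal Python sketch of DerSimonian-Laird pooling: it computes Cochran's Q, the between-study variance tau-squared, and a random-effects pooled estimate with a 95% confidence interval.

```python
import numpy as np

def random_effects_dl(yi, vi):
    """DerSimonian-Laird random-effects pooling of effect estimates yi
    (e.g. log odds ratios) with within-study variances vi."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    wi = 1.0 / vi                                  # inverse-variance (fixed-effect) weights
    mu_fe = np.sum(wi * yi) / np.sum(wi)           # fixed-effect pooled mean
    q = np.sum(wi * (yi - mu_fe) ** 2)             # Cochran's Q statistic
    c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)       # between-study variance, truncated at 0
    w_re = 1.0 / (vi + tau2)                       # random-effects weights
    mu = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, (mu - 1.96 * se, mu + 1.96 * se), tau2
```

If the inputs are log odds ratios, exponentiate the pooled estimate and its confidence limits to report on the odds ratio scale.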
Manually written methods sections frequently omit critical details such as conflict resolution procedures, software versions, or heterogeneity interpretation thresholds. The structured approach ensures every required element is addressed systematically.
The full pipeline is addressed: database selection, date ranges, screening procedures, extraction forms, risk of bias tools, quantitative synthesis, subgroup analyses, publication bias tests, and certainty of evidence assessments.
The generated paragraph serves as a high-quality starting draft rather than a rigid template. You retain full control to customize wording, add study-specific details, insert protocol registration numbers, and adapt the text for your target journal.
The methods section is the foundation upon which readers assess the validity, reproducibility, and trustworthiness of a systematic review. The PRISMA 2020 statement (Page et al., 2021) dedicates 15 of its 27 checklist items to methods reporting, reflecting the consensus that methodological transparency is the single most important quality marker. When methods are reported incompletely, readers cannot distinguish rigorous evidence synthesis from narrative literature summaries, and the review loses its capacity to inform clinical decisions or policy.
Journals expect authors to justify their analytical choices: why a random-effects model was selected over fixed-effect, why REML was preferred over DerSimonian-Laird, and which sensitivity analyses were pre-planned. The Cochrane Handbook (Higgins et al., 2023) provides detailed guidance on each decision, and aligning your methods with these reporting standards signals methodological competence to reviewers. Common reasons for rejection include insufficient search strategy detail, failure to describe the screening process, and incomplete heterogeneity reporting. Our search strategy builder and PRISMA screening checklist complement this tool by helping you develop the underlying methodology.
Protocol registration on PROSPERO creates a time-stamped record of pre-specified methods that protects against outcome reporting bias. The PROSPERO registration formatter helps structure your protocol, and studies show that reviews with prospective registration demonstrate higher methodological quality scores (Sideri et al., 2018). The GRADE evidence assessment tool supports the certainty-of-evidence step by guiding teams through risk of bias, inconsistency, indirectness, imprecision, and publication bias.
A methods paragraph generator saves researchers hours of writing time while reducing the risk of omitting required elements. The typical systematic review methods section ranges from 800 to 1,500 words and must cover protocol registration, eligibility criteria, information sources, search strategy, selection process, data collection, study risk of bias, effect measures, synthesis methods, reporting bias assessment, and certainty assessment. Missing any one of these invites reviewer criticism and potential desk rejection.
The statistical analysis subsection requires particular precision. Reviewers expect to see the exact effect measure (odds ratio, risk ratio, mean difference, or standardized mean difference), the pooling model and its justification, the between-study variance estimator, the method for constructing confidence intervals, and the software with version number. The Cochrane Handbook Chapter 10 specifies that authors should report Cochran Q, I-squared with its confidence interval, tau-squared, and prediction intervals when applicable. Our forest plot generator produces these statistics alongside the visualization.
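The heterogeneity statistics named above follow directly from the study-level estimates. As a hedged sketch (assuming inverse-variance weighting and the DerSimonian-Laird tau-squared; the Higgins-style prediction interval uses a t distribution with k - 2 degrees of freedom), one possible computation in Python:

```python
import numpy as np
from scipy import stats

def heterogeneity(yi, vi):
    """Cochran's Q, I-squared (%), DL tau-squared, and a 95% prediction
    interval for study effects yi with within-study variances vi (k >= 3)."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    k = len(yi)
    wi = 1.0 / vi
    mu_fe = np.sum(wi * yi) / np.sum(wi)
    q = np.sum(wi * (yi - mu_fe) ** 2)             # Cochran's Q
    df = k - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (q - df) / c)                  # DL between-study variance
    w_re = 1.0 / (vi + tau2)                       # random-effects mean and SE
    mu = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    t_crit = stats.t.ppf(0.975, df=k - 2)          # prediction interval (Higgins et al., 2009)
    half = t_crit * np.sqrt(tau2 + se ** 2)
    return {"Q": q, "I2": i2, "tau2": tau2, "PI": (mu - half, mu + half)}
```

The prediction interval conveys the range of true effects expected in a new setting, which is often more informative than I-squared alone.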
Risk of bias reporting must specify the assessment tool (RoB 2 for trials, ROBINS-I for non-randomized studies, Newcastle-Ottawa Scale for observational studies), the number of assessors, and how disagreements were resolved. The results of risk of bias assessment feed directly into the GRADE certainty rating and influence the interpretation of the pooled estimate. Our risk of bias visualization tool helps present these results clearly in your manuscript.
Publication bias assessment should be described prospectively in the methods, including the minimum number of studies required for funnel plot analysis (typically 10), the statistical tests applied (Egger regression, Begg rank correlation, or trim-and-fill), and any sensitivity analyses planned if asymmetry is detected. Use our funnel plot generator to perform these assessments and obtain the statistics needed for your methods paragraph.
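Egger's regression test itself is a simple weighted regression. The sketch below is illustrative (not the funnel plot generator's internals): it regresses the standardized effect on precision by ordinary least squares and tests whether the intercept differs from zero, which would suggest small-study effects. It assumes SciPy 1.7+ for the `intercept_stderr` attribute of `linregress`.

```python
import numpy as np
from scipy import stats

def egger_test(yi, sei):
    """Egger's regression test for funnel-plot asymmetry.
    yi: effect estimates; sei: their standard errors.
    Returns the regression intercept and its two-sided p-value."""
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    # Regress standardized effect (yi / sei) on precision (1 / sei);
    # for Egger's test the INTERCEPT, not the slope, is the statistic.
    res = stats.linregress(1.0 / sei, yi / sei)
    t = res.intercept / res.intercept_stderr       # needs SciPy >= 1.7
    p = 2 * stats.t.sf(abs(t), df=len(yi) - 2)
    return res.intercept, p
```

With fewer than roughly ten studies the test is badly underpowered, which is why the methods section should pre-specify the minimum study count for funnel plot analysis.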
Finally, the PRISMA 2020 reporting framework emphasizes that deviations from the registered protocol must be documented transparently. If your final analysis differs from the pre-specified plan (such as adding post-hoc subgroup analyses or changing the primary effect measure), state this explicitly in the methods with justification. Transparent reporting of deviations builds reader trust and demonstrates intellectual honesty, which is far more credible than presenting all analyses as if they were planned from the outset.
A complete methods section should describe the protocol registration, literature search strategy (databases, date range, restrictions), study screening and selection process (tools, number of reviewers, conflict resolution), data extraction procedures, risk of bias assessment, statistical analysis methods (effect measure, model, software), heterogeneity assessment, publication bias evaluation, and certainty of evidence framework. The PRISMA 2020 statement provides a detailed checklist of items that should be reported in the methods section of any systematic review.
The methods section typically ranges from 800 to 1,500 words for a standard systematic review and meta-analysis. The length depends on the complexity of the review: network meta-analyses and reviews with multiple subgroup analyses require more detailed reporting. Journals may impose word limits, but the priority should always be providing enough detail for another researcher to replicate your review. Many journals allow supplementary materials for extended methodological details such as full search strategies and sensitivity analysis plans.
The methods section should be drafted before conducting the review as part of your protocol development, then refined after completion to reflect any deviations. Writing methods prospectively (during protocol registration on PROSPERO or protocol publication) ensures that analytical decisions are pre-specified rather than data-driven. After completing the review, update the methods to accurately describe what was actually done, and document any deviations from the original protocol with justification.
A fixed-effect model assumes all studies estimate a single common true effect, and observed differences are due solely to sampling error. A random-effects model assumes the true effect varies across studies due to clinical, methodological, or contextual differences, and accounts for both within-study and between-study variance. Random-effects models are more conservative (produce wider confidence intervals) and are generally preferred when clinical heterogeneity is expected. The choice should be justified in the methods section based on the anticipated similarity of included studies.
The choice depends on your study designs. Use RoB 2 for randomized controlled trials, ROBINS-I for non-randomized studies of interventions, ROBINS-E for non-randomized studies of exposures, QUADAS-2 for diagnostic test accuracy studies, the Newcastle-Ottawa Scale for observational cohort and case-control studies, JBI checklists for various qualitative and quantitative designs, QUIPS for prognostic factor studies, and AMSTAR-2 for appraising systematic reviews in umbrella reviews. The tool must match the study design of your included studies.
Report the specific statistics you used: the I-squared statistic (percentage of variability due to heterogeneity rather than chance), Cochran Q test (statistical test for heterogeneity), tau-squared (estimated between-study variance), and prediction intervals (range of true effects across different settings). Also describe any pre-specified subgroup analyses or meta-regression planned to explore heterogeneity sources. Provide thresholds for interpretation (e.g., I-squared values of roughly 25%, 50%, and 75% representing low, moderate, and high heterogeneity per Higgins et al., 2003, or the overlapping bands given in the Cochrane Handbook).
Format your review protocol for PROSPERO registration using our PROSPERO registration formatter, which guides you through all required and optional fields. Verify your manuscript against the complete reporting checklist with the PRISMA screening checklist. Build comprehensive database queries using the search strategy builder, and assess the certainty of your evidence using the GRADE evidence assessment tool.
Reviewed by
Dr. Sarah Mitchell holds a PhD in Biostatistics from Johns Hopkins Bloomberg School of Public Health and has over 15 years of experience in systematic review methodology and meta-analysis. She has authored or co-authored 40+ peer-reviewed publications in journals including the Journal of Clinical Epidemiology, BMC Medical Research Methodology, and Research Synthesis Methods. A former Cochrane Review Group statistician and current editorial board member of Systematic Reviews, Dr. Mitchell has supervised 200+ evidence synthesis projects across clinical medicine, public health, and social sciences. She reviews all Research Gold tools to ensure statistical accuracy and compliance with Cochrane Handbook and PRISMA 2020 standards.
Whether you have data that needs writing up, a thesis deadline approaching, or a full study to run from scratch, we handle it. Average turnaround: 2-4 weeks.