A mixed methods systematic review is a type of evidence synthesis that systematically identifies, appraises, and integrates both quantitative and qualitative research to answer a complex question that neither study type can address independently. Unlike traditional quantitative-only or qualitative-only reviews, a mixed methods approach combines numerical outcome data with experiential, contextual, and process-oriented findings. The JBI Mixed Methods Methodology Group defines this synthesis type as one that "considers the complementarity of quantitative evidence of effectiveness and qualitative evidence of experience, feasibility, and meaningfulness." Hong et al. (2018) formalized a widely adopted framework distinguishing convergent, sequential, and multi-stage integration designs. This guide provides a practical, step-by-step approach for researchers conducting their first mixed methods systematic review, from formulating an appropriate research question through to final reporting with PRISMA 2020 compliance.
When a Mixed Methods Approach Is the Right Choice
Not every research question requires a mixed methods systematic review. Choosing this design adds complexity to your protocol, so understanding when it genuinely adds value prevents unnecessary methodological burden.
Complex interventions are the most common trigger for a mixed methods synthesis. When you need to evaluate both whether an intervention works (quantitative effectiveness data) and how or why it works (qualitative process data), a single-method review will only capture half the picture. Sandelowski et al. (2006) demonstrated this clearly in their landmark work integrating quantitative HIV adherence outcomes with qualitative patient experience findings, showing that the combined synthesis revealed barriers and facilitators invisible to either approach alone.
Implementation questions are another strong candidate. If your review asks not just "does this work?" but also "what helps or hinders implementation in real-world settings?", you need both outcome studies and qualitative implementation research. The Cochrane Qualitative and Implementation Methods Group (QIMG) has published guidance on exactly this scenario, recommending mixed methods synthesis when policy decisions require understanding context alongside effectiveness.
Health services research frequently benefits from a mixed methods approach because healthcare delivery involves human behavior, organizational systems, and clinical outcomes simultaneously. A review of a patient education program, for example, needs randomized trial data on clinical outcomes alongside interview-based studies exploring how patients experienced and interpreted the educational content.
Questions that are purely about treatment effect sizes, diagnostic accuracy, or prevalence estimates do not require mixed methods synthesis. If your question can be answered entirely with numerical data and statistical pooling, a standard quantitative systematic review or meta-analysis is more efficient and methodologically cleaner.
Integration Designs: Convergent, Sequential, and Multi-Stage
The integration design determines how and when you combine your quantitative and qualitative streams. Hong et al. (2018) described three primary designs, each suited to different research questions and resource constraints.
Convergent Design
In a convergent design, you conduct the quantitative and qualitative synthesis streams simultaneously and integrate the findings at the interpretation stage. Both streams use the same search strategy and eligibility criteria, and you synthesize each stream independently before bringing them together in a final integration step. This is the most common design and works well when your quantitative and qualitative questions are closely related. Pluye et al. (2009) used a convergent approach to examine primary healthcare innovations, synthesizing effectiveness data and qualitative implementation evidence in parallel before mapping the combined findings into a unified framework.
The main advantage of convergent synthesis is efficiency. You run one search, screen once, and extract data from all included studies in a single pass. The challenge is the integration step, which requires careful methodological planning: you must decide in advance how you will compare, contrast, and combine findings from two fundamentally different evidence types.
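At the integration step, many convergent reviews build a "joint display": a matrix that juxtaposes the quantitative outcomes and qualitative themes for each domain of interest. As a minimal sketch of the idea in Python (the study names, domains, and findings below are invented for illustration, not drawn from any real review):

```python
# Minimal sketch of a convergent-design integration matrix ("joint display").
# All study names, domains, and findings here are hypothetical.

# Findings from the quantitative stream, keyed by outcome domain.
quantitative = {
    "adherence": [("Trial A", "RR 1.4 favouring intervention")],
    "knowledge": [("Trial B", "No significant difference")],
}

# Findings from the qualitative stream, keyed to the same domains.
qualitative = {
    "adherence": [("Interview study C", "Peer support framed as key facilitator")],
    "knowledge": [("Focus group D", "Materials seen as too technical")],
}

def joint_display(quant, qual):
    """Juxtapose both streams per domain so convergence, divergence,
    and gaps (a domain covered by only one stream) become visible."""
    rows = []
    for domain in sorted(set(quant) | set(qual)):
        rows.append({
            "domain": domain,
            "quantitative": quant.get(domain, []),
            "qualitative": qual.get(domain, []),
        })
    return rows

for row in joint_display(quantitative, qualitative):
    print(row["domain"])
    for source, finding in row["quantitative"] + row["qualitative"]:
        print(f"  {source}: {finding}")
```

Rows where one stream is empty flag evidence gaps; rows where both streams address the same domain are where you make your convergence or divergence judgments.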
Sequential Design
In a sequential design, one synthesis stream informs the other. The most common sequence is quantitative-first: you conduct a meta-analysis or quantitative synthesis, identify gaps, unexplained heterogeneity, or unexpected findings, and then conduct a qualitative synthesis specifically designed to explore those gaps. Heyvaert et al. (2013) outlined the theoretical underpinnings of this approach, arguing that sequential designs are particularly valuable when quantitative results raise "why" questions that only qualitative evidence can answer.
The reverse sequence (qualitative-first) is less common but equally valid. You might start with a qualitative synthesis to identify relevant constructs, barriers, or facilitators, and then design your quantitative synthesis to test whether those constructs are reflected in outcome data.
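In the quantitative-first sequence, the trigger for the qualitative stream is often statistical: outcomes with substantial unexplained heterogeneity are the ones raising "why" questions. A rough sketch of that decision point using the standard I² statistic (the outcome names, Q values, and the 50% threshold are illustrative assumptions, not a fixed rule):

```python
# Sketch of flagging meta-analysis outcomes for qualitative follow-up.
# Outcome names, Q statistics, and the 50% cutoff are hypothetical.

def i_squared(q: float, df: int) -> float:
    """I²: percentage of variability across studies attributable to
    heterogeneity rather than chance, computed from Cochran's Q."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q * 100.0)

# Pooled outcomes from the quantitative stream: (outcome, Q, degrees of freedom)
outcomes = [("adherence", 24.0, 8), ("knowledge", 7.5, 7)]

# Outcomes with high unexplained heterogeneity seed the qualitative
# synthesis question: *why* does the effect vary across settings?
follow_up = [name for name, q, df in outcomes if i_squared(q, df) > 50.0]
```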
Multi-Stage Design
A multi-stage design combines elements of both convergent and sequential approaches across multiple phases. This design is most appropriate for large-scale evidence syntheses addressing broad policy questions where the research landscape includes diverse study types and multiple sub-questions. Multi-stage designs are resource-intensive and typically require a dedicated review team with expertise in both quantitative and qualitative methods.
Quality Appraisal With the Mixed Methods Appraisal Tool
Appraising the quality of studies in a mixed methods systematic review is uniquely challenging because you must evaluate quantitative, qualitative, and mixed methods primary studies, each with different validity criteria.
The Mixed Methods Appraisal Tool (MMAT), developed by Pluye et al. (2009) and updated through multiple iterations, is the most widely used critical appraisal instrument for mixed methods reviews. The MMAT provides a single, integrated framework with five categories of study designs: qualitative research, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed methods studies. Each category includes specific appraisal criteria tailored to that design.
For qualitative studies, the MMAT assesses whether the qualitative approach is appropriate to the research question, whether data collection methods are adequate, whether findings are adequately derived from the data, whether the interpretation of results is sufficiently substantiated, and whether there is coherence between the data sources, collection, analysis, and interpretation.
For quantitative randomized trials, the MMAT evaluates randomization procedure, group comparability at baseline, outcome data completeness, blinding of outcome assessors, and whether participants adhered to the assigned intervention.
For mixed methods primary studies, the MMAT uniquely evaluates the rationale for using a mixed methods design, the effectiveness of integrating the qualitative and quantitative components, and whether the outputs of integration are adequately interpreted.
You can use the JBI critical appraisal tools alongside or instead of the MMAT for individual study designs, particularly when you want more granular assessment criteria for specific study types. Some review teams use JBI checklists for the quantitative and qualitative components and the MMAT specifically for appraising mixed methods primary studies.
A critical decision is whether to exclude studies based on quality scores. The MMAT developers explicitly advise against using overall quality scores to exclude studies, recommending instead that quality assessment results be used to inform the interpretation and weighting of evidence during synthesis.
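Following that advice in practice is easier if appraisals are recorded in a form that supports weighting during synthesis rather than exclusion. A hypothetical sketch (the data structures and criterion labels are my own illustration, not part of the MMAT itself, though the five category names follow the tool):

```python
# Hypothetical sketch: recording MMAT-style appraisals and summarising
# them without using scores to exclude studies.

from dataclasses import dataclass

# The five MMAT study-design categories.
MMAT_CATEGORIES = (
    "qualitative",
    "randomized controlled trial",
    "non-randomized",
    "quantitative descriptive",
    "mixed methods",
)

@dataclass
class Appraisal:
    study_id: str
    category: str   # one of MMAT_CATEGORIES
    criteria: dict  # criterion label -> "yes" / "no" / "can't tell"

    def criteria_met(self) -> int:
        return sum(1 for answer in self.criteria.values() if answer == "yes")

def quality_summary(appraisals):
    """Group study IDs by number of criteria met. Every study stays in
    the synthesis; the summary informs interpretation and weighting."""
    summary = {}
    for a in appraisals:
        summary.setdefault(a.criteria_met(), []).append(a.study_id)
    return summary
```

During synthesis, a summary like this lets you note, for example, that a divergent finding rests mainly on studies meeting few criteria, without dropping those studies from the review.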