Pluye, P., Robert, E., Cargo, M., Bartlett, G., O’Cathain, A., Griffiths, F., Boardman, F., Gagnon, M.P., & Rousseau, M.C. (2011). Proposal: A mixed methods appraisal tool for systematic mixed studies reviews. Retrieved from http://mixedmethodsappraisaltoolpublic.pbworks.com. Archived by WebCite® at http://www.webcitation.org/5tTRTc9yJ
Relevance for Public Health
Criteria for the MMAT were developed based on a thematic analysis of the quality appraisal processes used in 17 health-related systematic mixed studies reviews. The tool reflects best practices in mixed methods research at the time of its development. It is well suited to a public health context, particularly for questions about complex interventions that are context-dependent and process-oriented. Systematic mixed studies reviews leverage the strengths of both qualitative and quantitative evidence, allowing for a comprehensive understanding of public health issues and solutions.
The Mixed Methods Appraisal Tool (MMAT) is a checklist that was developed to provide a quality appraisal tool for quantitative, qualitative and mixed methods studies included in systematic mixed studies reviews (Pluye & Hong, 2014). Compared to other tools, the MMAT specifically includes criteria for appraising mixed methods studies. While critical appraisal tools are more widely available for quantitative and qualitative research, there has not been consensus on quality criteria for mixed methods research (O'Cathain et al., 2008). Mixed methods research is important because it provides both breadth and depth of evidence for specific questions of interest. Quantitative methods provide statistical evidence to draw generalizable descriptions and causal inferences, whereas qualitative methods provide foundational or additional contextual, process and experiential evidence.
Whenever possible, good quality syntheses of the evidence, rather than single studies, should be used for public health practice and policy decisions with regard to prevention, diagnosis and treatment of health problems, and health promotion. Mixed studies reviews synthesize quantitative, qualitative and mixed methods studies that relate to a particular question. As with other types of syntheses, reviews of mixed studies save time and effort because the different quantitative, qualitative and mixed methods studies have already been found and the findings summarized for specific questions of interest (Ciliska, 2012).
The Mixed Methods Appraisal Tool (MMAT) provides a set of criteria for concomitantly appraising the methodological quality of the quantitative, qualitative and mixed methods studies included in a mixed studies review. The criteria for the initial tool were based on
(a) the theoretical work of Ian Hacking, a philosopher of science known for his work on social constructionism, and
(b) a review of 17 health-related systematic mixed studies reviews (Pluye et al., 2009).
The criteria for this 2011 version of the MMAT were further informed by methodological experts and by workshops with national and international researchers. The MMAT checklist includes screening questions that are applied across all relevant studies. There are 19 items to assess the quality of five different types of studies (qualitative research, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed methods studies). An overall quality score can be calculated for each included study. Given that such a score may not be informative on its own, users can also create a descriptive summary using the MMAT criteria.
Accessing the Method/Tool
Implementing the Method/Tool
Time for Participation/Completion
10 to 30 minutes
The time required to appraise the studies included in a mixed studies review will depend in part on the number of individual studies in the review. In Pace et al.'s (2012) reliability study, appraisal took an average of 14 minutes per study (excluding the initial reading of articles). Souto et al. (2014) reported an average of 11 minutes per study.
Additional Resources and/or Skills Needed for Implementation
No additional resources are needed to implement this tool. The authors have developed a wiki page featuring resources (the MMAT checklist, the MMAT manual, an FAQ page, and a link to a toolkit for designing, conducting and reporting mixed studies reviews) that provide guidance on how to use the tool and enable collaborative work on developing the MMAT.
Steps for Using Method/Tool
- Apply the screening questions for all studies.
- For each relevant study, determine the type of design and use the corresponding criteria to appraise the study's quality.
- Two independent reviewers should conduct the appraisal process. Reviewers should meet before the appraisal process to share their understanding of the MMAT checklist and manual to increase the rigor of the quality appraisals.
- To determine an overall quality score for each study (if needed), divide the number of criteria met by the total number of criteria for the study's design type.
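The scoring step above is a simple proportion. As a minimal sketch (the function name and the boolean-list representation are illustrative assumptions, not part of the MMAT itself):

```python
def mmat_score(criteria_met):
    """Overall quality score: number of criteria met divided by total criteria.

    `criteria_met` is a list of booleans, one per MMAT criterion for the
    study's design type (e.g. 4 items for a randomized controlled trial).
    """
    return sum(criteria_met) / len(criteria_met)

# Example: an RCT judged to meet 3 of its 4 quality criteria
print(mmat_score([True, True, True, False]))  # 0.75
```

As the summary notes, such a score may not be informative on its own; a descriptive summary of which criteria were met is often more useful.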
Who is involved
Systematic review authors who are interested in appraising and reporting on the quality of studies included in their mixed studies reviews could use this tool. The tool may also be useful for knowledge users who would like to appraise the quality of primary studies with diverse designs (qualitative, quantitative and mixed methods research designs).
Conditions for Use
Evaluation and Measurement Characteristics
Has been evaluated.
While the MMAT is still in development and should therefore be used with caution, it has undergone a number of iterative evaluations since the initial version was developed in 2009.
Validity properties meet accepted standards.
One evaluation of the MMAT's validity has been published to date (Pluye et al., 2009). This evaluation used a theoretical model and qualitative thematic analysis of the quality appraisal procedures used in 17 health-related systematic mixed studies reviews. Its findings support the substantive/theoretical and content validity of the initial criteria.
Reliability properties meet accepted standards.
The kappa statistic measures the level of agreement between two observers beyond what would be expected by chance. Using the Landis and Koch (1977) criteria, kappa scores above 0.80 indicate almost perfect agreement; 0.61 to 0.80, substantial agreement; 0.41 to 0.60, moderate agreement; 0.21 to 0.40, fair agreement; and 0 to 0.20, slight agreement. Two evaluations of the MMAT's reliability have been completed to date (Pace et al., 2012; Souto et al., 2014). Both found that agreement between reviewers ranged from fair to perfect on individual MMAT criteria, and Pace et al. (2012) found that agreement on the overall quality score was substantial.
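Cohen's kappa is computed as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each reviewer's rating frequencies. A minimal sketch (the function and the example ratings are illustrative, not data from the MMAT evaluations):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two reviewers' categorical ratings of the same items."""
    n = len(ratings_a)
    # Observed agreement: proportion of items on which the reviewers agree
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: derived from each reviewer's marginal rating frequencies
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two reviewers rating 10 studies on one criterion (1 = criterion met, 0 = not met)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.52 — moderate agreement per Landis and Koch
```

This illustrates why raw percent agreement (here 80%) can overstate reliability: with skewed rating frequencies, much of that agreement would occur by chance alone.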
Method of Development
A three-step critical review was conducted.
The MMAT was developed in the Department of Family Medicine at McGill University.
These summaries are written by the NCCMT to condense and provide an overview of the resources listed in the Registry of Methods and Tools, and to give suggestions for their use in a public health context. For more information on individual methods and tools, please consult the authors/developers of the original resources.