Appraise the quality of studies

Page history last edited by Quan Nha HONG 3 years, 3 months ago

Hundreds of critical appraisal tools can be found in the literature. These tools have mainly been developed for one specific study design (e.g., randomized controlled trials) or for one category of study designs (e.g., qualitative research). Currently, two options are available to researchers conducting mixed studies reviews: (1) use a different tool for each study design included in the review, or (2) use a single tool with criteria covering several study designs. Still, few tools have been developed for concomitantly assessing the types of studies included in mixed studies reviews, i.e., quantitative, qualitative, and mixed methods studies. In the following, we present some tools that can be used in mixed studies reviews.

 

The choice of tool will depend on several criteria, such as:

  • Study design coverage: if the review includes all three types of studies (qualitative, quantitative, and mixed methods), it is preferable to choose a tool with items covering all of these designs.   
  • Type of criteria: two main types of criteria can be identified: reporting criteria and methodological criteria. The quality of reporting is often used as a proxy measure for methodological quality. However, it has been found that good research may be poorly reported (Huwiler-Müntener et al, 2002; Mhaskar et al, 2012). Hence, it is important to make a clear distinction between these two quality domains. 
  • Measurement properties: it is important to check the measurement properties of the tools to ensure that they serve their intended purpose (appropriateness), that they were developed in accordance with guidelines (clear origin of the criteria), that they can be used in different circumstances and by different persons (reliability), and that they correctly measure what they intend to measure (validity). 
  • Time to complete: time needed to appraise a study.
  • User manual: existence of guidance on how to use the tool.

 

1) Mixed Methods Appraisal Tool (MMAT) (Pace et al, 2012; Pluye et al, 2009)

 

The Mixed Methods Appraisal Tool (MMAT) was developed at McGill University (Canada). Its development is based on mixed methods theory and on a review of appraisal tools for quantitative and qualitative studies, e.g., tools used in systematic mixed studies reviews (Pluye et al, 2009). It contains 19 methodological quality criteria for appraising quantitative, qualitative, and mixed methods studies. These criteria are scored on a nominal scale (Yes/No/Can't tell) and allow for the assessment of five main types of studies: (1) qualitative studies; (2) randomized controlled trials; (3) non-randomized quantitative studies; (4) quantitative descriptive studies; and (5) mixed methods studies.
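As an illustration, an appraisal with nominal Yes/No/Can't tell criteria of the kind just described could be recorded as follows. This is a hypothetical sketch: the study design labels follow the five categories above, but the criterion identifiers and example ratings are invented, not the official MMAT items.

```python
# Hypothetical sketch of recording MMAT-style appraisals; criterion IDs
# and ratings below are illustrative, not the official MMAT wording.
from dataclasses import dataclass, field

DESIGNS = (
    "qualitative",
    "randomized controlled trial",
    "non-randomized quantitative",
    "quantitative descriptive",
    "mixed methods",
)
RATINGS = ("yes", "no", "cant_tell")

@dataclass
class Appraisal:
    study_id: str
    design: str                                   # one of DESIGNS
    ratings: dict = field(default_factory=dict)   # criterion -> rating

    def rate(self, criterion: str, rating: str) -> None:
        if self.design not in DESIGNS:
            raise ValueError(f"unknown design: {self.design}")
        if rating not in RATINGS:
            raise ValueError(f"rating must be one of {RATINGS}")
        self.ratings[criterion] = rating

    def met(self) -> int:
        """Number of criteria rated 'yes'."""
        return sum(1 for r in self.ratings.values() if r == "yes")

a = Appraisal("Smith-2010", "mixed methods")
a.rate("1.1", "yes")
a.rate("1.2", "cant_tell")
a.rate("1.3", "yes")
print(a.met())  # → 2
```

Keeping "Can't tell" distinct from "No" preserves the difference between a methodological weakness and a reporting gap, which matters given the reporting/methodology distinction noted earlier.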

 

Criteria:

  • Study design coverage: qualitative, quantitative, and mixed methods studies.   
  • Type of criteria: methodological criteria.
  • Measurement properties: clear origin and content validation of criteria. An inter-rater reliability of .717 was found for the global score (Pace et al, 2012). 
  • Time to complete: 14 minutes (Pace et al, 2012).
  • User manual: yes.
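The inter-rater reliability figures quoted for these tools are typically chance-corrected agreement statistics such as Cohen's kappa. A minimal sketch of that computation for two raters on nominal ratings is shown below; the example data are invented, not taken from Pace et al (2012).

```python
# Minimal sketch of Cohen's kappa for two raters on nominal ratings.
# The rating sequences are invented example data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "cant_tell", "yes", "no"]
b = ["yes", "no",  "no", "cant_tell", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # → 0.455
```

Because kappa subtracts the agreement expected by chance, it is lower than raw percentage agreement (4/6 ≈ .667 here), which is why reliability studies report it rather than simple agreement.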

 


 

2) Evaluation tool for mixed methods study designs (Long, 2005; Long et al, 2002; Long and Godfrey, 2004)

 

This tool was developed at the Nuffield Institute for Health (United Kingdom). Its development was based on established methodological checklists for quantitative research (Long et al, 2002) and on the literature and epistemology of qualitative research (Long and Godfrey, 2004). This tool proposes 50 reporting criteria related to seven review areas: study evaluative overview; study and context (setting, sample, and outcome measurement); ethics; group comparability; qualitative data collection and analysis; policy and practice implications; and other comments. The authors mentioned that their tool should be used as a template to aid the formation of a judgement on the quality of a paper, and should not be seen as a fixed set of methodological questions to ask (Long and Godfrey, 2004). 

 

Criteria:

  • Study design coverage: qualitative and quantitative studies.   
  • Type of criteria: reporting criteria.
  • Measurement properties: no study of the measurement properties of this tool was found.
  • Time to complete: not mentioned.
  • User manual: no manual but examples are given within the tool.

 


 

3) Quality Assessment Tool (QATSDD) (Sirriyeh et al, 2012)

 

The Quality Assessment Tool for Studies with Diverse Designs (QATSDD) was developed at the University of Leeds (United Kingdom). It contains 16 reporting criteria scored on a 4-point scale from 0 to 3 (Not at all/Very slightly/Moderately/Complete). These criteria apply to quantitative and qualitative studies. 
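Ordinal 0-3 ratings of this kind are often summarized as a total score expressed as a percentage of the maximum possible. The sketch below illustrates that convention under stated assumptions: the criterion ratings are invented, and this particular summary is an illustration, not a scoring rule prescribed by Sirriyeh et al (2012).

```python
# Hypothetical sketch: summarizing QATSDD-style 0-3 ratings as a
# percentage of the maximum possible score. Example ratings invented.
SCALE = {"not at all": 0, "very slightly": 1, "moderately": 2, "complete": 3}

def quality_percentage(ratings):
    """ratings: one label from SCALE per applicable criterion."""
    if not ratings:
        raise ValueError("no ratings given")
    total = sum(SCALE[r] for r in ratings)
    maximum = 3 * len(ratings)
    return 100 * total / maximum

ratings = ["complete", "moderately", "very slightly", "complete"]
print(quality_percentage(ratings))  # → 75.0
```

Dividing by the number of applicable criteria keeps scores comparable across study designs for which different subsets of the 16 criteria apply.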

 

Criteria:

  • Study design coverage: qualitative and quantitative studies.   
  • Type of criteria: reporting and methodological quality criteria.
  • Measurement properties: test-retest and inter-rater reliability were assessed and ranged from good to substantial (kappa ranging from .698 to .901). Content validation was conducted with 9 health care researchers.  
  • Time to complete: not mentioned.
  • User manual: guidance notes are provided (Table 1 in Sirriyeh et al, 2012).

 


 

4) Crowe Critical Appraisal Tool (CCAT) (Crowe and Sheppard, 2011; Crowe et al, 2011; 2012)

 

The Crowe Critical Appraisal Tool (CCAT) was developed at James Cook University in Townsville (Australia). It contains 54 reporting items in eight categories: Preamble, Introduction, Design, Sampling, Data collection, Ethical matters, Results, and Discussion. The items are rated on a nominal scale (Present/Absent/Not applicable). These items apply to quantitative and qualitative studies. 

 

Criteria:

  • Study design coverage: qualitative and quantitative studies.   
  • Type of criteria: reporting and methodological criteria.
  • Measurement properties: clear origin of criteria and construct validation (Crowe and Sheppard, 2011). An inter-rater reliability of .74 was found for the total score (Crowe et al, 2012).
  • Time to complete: not mentioned.
  • User manual: yes.

 


 

 

REFERENCES

  • Crowe, M., & Sheppard, L. (2011). A general critical appraisal tool: an evaluation of construct validity. International Journal of Nursing Studies, 48(12), 1505-1516. 
  • Crowe, M., Sheppard, L., & Campbell, A. (2011). Comparison of the effects of using the Crowe Critical Appraisal Tool versus informal appraisal in assessing health research: a randomised trial. International Journal of Evidence-based Healthcare, 9(4), 444-449. 
  • Crowe, M., Sheppard, L., & Campbell, A. (2012). Reliability analysis for a proposed critical appraisal tool demonstrated value for diverse research designs. Journal of Clinical Epidemiology, 65(4), 375-383. 
  • Huwiler-Müntener, K., Jüni, P., Junker, C., & Egger, M. (2002). Quality of reporting of randomized trials as a measure of methodologic quality. JAMA, 287(21), 2801-2804.
  • Long, A.F. (2005). Evaluative tool for mixed method studies. Leeds, UK: Schools of Healthcare, University of Leeds.
  • Long, A.F., & Godfrey, M. (2004). An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology, 7(2), 181-196.
  • Long, A.F., Godfrey, M., Randall, T., Brettle, A.J., & Grant, M.J. (2002). Developing evidence based social care policy and practice. Part 3: Feasibility of undertaking systematic reviews in social care. Leeds: Nuffield Institute for Health.
  • Mhaskar, R., Djulbegovic, B., Magazin, A., Soares, H.P., & Kumar, A. (2012). Published methodological quality of randomized controlled trials does not reflect the actual quality assessed in protocols. Journal of Clinical Epidemiology, 65(6), 602-609.
  • Pace, R., Pluye, P., Bartlett, G., Macaulay, A.C., Salsberg, J., Jagosh, J., et al. (2012). Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review. International Journal of Nursing Studies, 49(1), 47-53. 
  • Pluye, P., Gagnon, M.P., Griffiths, F., & Johnson-Lafleur, J. (2009). A scoring system for appraising mixed methods research, and concomitantly appraising qualitative, quantitative and mixed methods primary studies in Mixed Studies Reviews. International Journal of Nursing Studies, 46(4), 529-546.
  • Sirriyeh, R., Lawton, R., Gardner, P., & Armitage, G. (2012). Reviewing studies with diverse designs: the development and evaluation of a new tool. Journal of Evaluation in Clinical Practice, 18(4), 746-752.
