Guidelines for Critiquing Research Design in a Quantitative Study:
Was the design experimental, quasi-experimental, or nonexperimental? What specific design was used? Was this a cause-probing study? Given the type of question (Therapy, Prognosis, etc.), was the most rigorous possible design used?
What type of comparison was called for in the research design? Was the comparison strategy effective in illuminating key relationships?
If the study involved an intervention, were the intervention and control conditions adequately described? Was blinding used, and if so, who was blinded? If not, is there a good rationale for failure to use blinding? Were data collected in a manner that minimized bias? Were the staff who collected data appropriately trained?
If the study was nonexperimental, why did the researcher opt not to intervene? If the study was cause-probing, which criteria for inferring causality were potentially compromised? Was a retrospective or prospective design used, and was such a design appropriate?
Was the study longitudinal or cross-sectional? Were the number and timing of data collection points appropriate?
What did the researcher do to control confounding participant characteristics, and were the procedures effective? What are the threats to the study’s internal validity? Did the design enable the researcher to draw causal inferences about the relationship between the independent variable and the outcome?
What are the major limitations of the design used? Were these limitations acknowledged by the researcher and taken into account in interpreting results? What can be said about the study’s external validity?
Were key variables operationalized using the best possible method (e.g., interviews, observations)? Are the specific instruments adequately described, and were they good choices given the study purpose and study population?
The review should be written in narrative format.