The quality of any research finding depends entirely on the methods used to produce it and the quality of evidence collected. This KK focuses on how to evaluate methods and evidence in relation to a specific research question — not in the abstract, but in the context of what the investigation is trying to answer.
Methods and evidence cannot be evaluated in isolation. A survey with a 50-person sample might be strong evidence for an exploratory qualitative question and completely inadequate evidence for a claim about population-wide trends. Always evaluate methods and evidence by asking: “Does this method, and this evidence, adequately address this specific research question?”
KEY TAKEAWAY: The evaluative standard is always the research question. A methodology is “appropriate” or “strong” only relative to what the investigation is trying to find out. Build this relational analysis into every methodological evaluation you write.
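The 50-person survey example above can be made concrete. A minimal sketch (Python, illustrative numbers only) of the 95% margin of error for a proportion estimated from a simple random sample shows why a small sample is weak evidence for a population-wide prevalence claim:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) 95% margin of error for a proportion
    estimated from a simple random sample of n responses."""
    return z * sqrt(p * (1 - p) / n)

# A 50-person sample leaves roughly +/- 14 percentage points of
# uncertainty; even 1,000 responses still leave about +/- 3 points.
print(round(margin_of_error(50), 3))    # ~0.139
print(round(margin_of_error(1000), 3))  # ~0.031
```

The same 50 responses that cannot pin down a prevalence figure may still be rich, adequate evidence for an exploratory qualitative question, which is exactly the relational point above.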
| Question Type | Appropriate Methods | Inappropriate Methods |
|---|---|---|
| “How prevalent is X?” | Large-scale survey | Single interview |
| “Why does X happen?” | Interviews, focus groups, case studies | Survey with closed questions only |
| “Does X cause Y?” | Controlled experiment | Correlational survey alone |
| “What is the lived experience of X?” | In-depth interviews, ethnography | Quantitative survey |
| “How has X changed over time?” | Longitudinal study, document analysis | Single cross-sectional survey |
Validity: Does the method actually measure what it claims to measure in this context?
- Are the instruments validated for this population?
- Are key concepts operationalised consistently with the question’s intent?
- Are confounding variables controlled or accounted for?
Generalisability: Can the findings be generalised to the target population the research question implies?
- Is the sample representative of that population?
- Were the conditions of data collection realistic?
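The representativeness check can be done concretely. A hypothetical sketch comparing a sample's composition against known population proportions (all strata and figures invented for illustration):

```python
def representativeness_gaps(sample_counts, population_props):
    """For each stratum, the difference between its share of the
    sample and its share of the target population (0 means the
    sample is perfectly representative on this variable)."""
    n = sum(sample_counts.values())
    return {k: sample_counts[k] / n - population_props[k]
            for k in population_props}

# Hypothetical age strata: this sample badly over-represents 18-30s,
# so generalising to the whole population is not justified.
sample = {"18-30": 30, "31-50": 15, "over-50": 5}
population = {"18-30": 0.25, "31-50": 0.40, "over-50": 0.35}
gaps = representativeness_gaps(sample, population)
```

A table of such gaps in an appendix is far more persuasive in an evaluation section than a bare assertion that the sample "may not be representative".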
Reliability: Would the same method produce consistent results if repeated?
- Are procedures standardised?
- Are instruments reliable (internal consistency, test-retest reliability)?
- Is inter-rater reliability established for qualitative coding?
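Each of these reliability checks has a standard statistic. A hedged sketch of all three, in plain Python with hypothetical scores (Cronbach's alpha for internal consistency, Pearson's r for test-retest, Cohen's kappa for inter-rater agreement):

```python
from collections import Counter
from math import sqrt
from statistics import mean, pvariance

def cronbach_alpha(items):
    """Internal consistency: items is one list of scores per
    questionnaire item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def pearson_r(x, y):
    """Test-retest reliability: correlate the same respondents'
    scores from two administrations of the instrument."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def cohen_kappa(rater1, rater2):
    """Inter-rater agreement on categorical codes, corrected for
    the agreement expected by chance alone."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[c] * c2[c] for c in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: three survey items across four respondents,
# five respondents tested twice, and two coders coding six excerpts.
alpha = cronbach_alpha([[2, 4, 3, 5], [3, 5, 2, 4], [2, 4, 3, 5]])
retest = pearson_r([12, 15, 11, 18, 14], [13, 14, 11, 17, 15])
kappa = cohen_kappa(list("aabbab"), list("aabaab"))
```

Reporting one of these figures (with its data) is the kind of specific evidence of reliability the evaluation criteria reward.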
EXAM TIP: A three-part answer to “evaluate the method in relation to the research question” should cover: (1) whether the method type is appropriate, (2) a specific strength in how it was implemented, and (3) a specific limitation that affects the conclusions drawn.
After evaluating the method's design, evaluate the quality of the evidence it produced and, crucially, the fit between that evidence and the conclusion drawn from it. Even high-quality evidence may not support a particular conclusion if:
- The evidence is from a different population than the conclusion claims
- The timeframe of the evidence does not match the timeframe of the claim
- The operational definition used differs from the concept in the conclusion
- The effect observed is too small to be practically significant for the claim made
This is underdetermination: the evidence underdetermines the conclusion, because the conclusion claims more than the evidence can establish.
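The practical-significance bullet is usually assessed with a standardised effect size. A minimal sketch of Cohen's d (hypothetical group scores; by the usual convention, roughly 0.2 is "small", 0.5 "medium", 0.8 "large"):

```python
from math import sqrt
from statistics import mean, pvariance

def cohens_d(group_a, group_b):
    """Standardised mean difference, using the average of the two
    population variances as a simple pooled estimate."""
    pooled_sd = sqrt((pvariance(group_a) + pvariance(group_b)) / 2)
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical scores: the difference is real but small (~0.21),
# too weak on its own to support a strong practical claim.
d = cohens_d([10, 12, 11, 13, 9, 12, 11, 10],
             [10, 11, 12, 9, 11, 10, 13, 10])
```

An effect this size may still be statistically detectable in a large sample, which is exactly why "significant" evidence can underdetermine a practically meaningful conclusion.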
In your written report’s evaluation section, apply all of the above to your own investigation:
- Were your methods appropriate for your specific research question?
- What are the key limitations of your evidence?
- What would stronger evidence look like?
- What alternative explanations remain after considering your evidence?
Assessors look for honest, specific self-evaluation — not vague disclaimers (“the sample was small”) but substantive analysis of how limitations affect the reliability, validity and generalisability of your conclusions.
APPLICATION: Write a methods evaluation section for your own investigation before you write the results section. This forces you to identify limitations while they can still be addressed, and it produces the reflection required for a high-scoring written report.
COMMON MISTAKE: Confusing methodological limitations with failures. Having a small sample does not mean your research is worthless — it means your conclusions must be appropriately qualified. The error is not having a small sample; the error is claiming population-level conclusions from a small-sample study.