Evaluation is the systematic assessment of a product’s success in meeting its design brief and evaluation criteria. Strong evaluation is:
- Evidence-based: Grounded in measurable data and documented feedback, not impressions
- Criterion-referenced: Each aspect of evaluation traces back to a specific criterion
- Comprehensive: Addresses function, aesthetics, safety, sustainability, ergonomics, and social/cultural factors
- Honest: Reports failures and limitations as well as successes; suggests realistic improvements
Quantitative methods produce numerical data that can be measured, compared, and analysed statistically.
Measurement testing:
- Verify dimensions against working drawing specifications (ruler, vernier caliper, micrometer)
- Load testing: apply specified load; record whether product held, deflected, or failed
- Weight measurement: compare product weight to specification
- Timing: measure how long the product takes to operate, assemble, or clean
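The dimension check above is simple arithmetic: compare each measured value to its specified value and decide whether the deviation falls within tolerance. A minimal sketch, with all specification values, tolerances, and measurements invented for illustration:

```python
# Hypothetical example: checking measured dimensions against
# working-drawing specifications. All values are invented.

spec = {                      # dimension -> (target mm, tolerance mm)
    "height": (450.0, 1.0),
    "width":  (300.0, 1.0),
    "depth":  (20.0, 0.5),
}

measured = {"height": 450.6, "width": 302.1, "depth": 19.8}

def check_dimensions(spec, measured):
    """Return dimension -> (deviation in mm, within_tolerance)."""
    results = {}
    for name, (target, tol) in spec.items():
        deviation = measured[name] - target
        results[name] = (round(deviation, 2), abs(deviation) <= tol)
    return results

print(check_dimensions(spec, measured))
```

Here the width is 2.1 mm over specification and would be recorded as a failed criterion; the other two dimensions pass.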
Structured rating surveys:
- End users rate attributes on a numeric scale (e.g. 1–5) or Likert scale
- Multiple users’ ratings averaged to identify consistent patterns
- Results tabulated and presented graphically
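Averaging multiple users' ratings per attribute, as described above, can be sketched as follows; the attribute names and ratings are invented for illustration:

```python
# Hypothetical example: four users rate three attributes on a 1-5 scale.
# All ratings are invented.

ratings = {
    "comfort":    [4, 5, 4, 3],
    "appearance": [5, 4, 5, 5],
    "stability":  [2, 3, 2, 2],
}

def summarise(ratings):
    """Mean rating per attribute, rounded to 2 decimal places."""
    return {attr: round(sum(vals) / len(vals), 2)
            for attr, vals in ratings.items()}

for attr, mean in summarise(ratings).items():
    print(f"{attr:<11} {mean}")
```

A consistently low mean (here, stability) points to a pattern across users rather than one individual's impression, which is exactly what averaging is meant to expose.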
Performance testing:
- Standardised tests relevant to the product type: tensile strength, impact resistance, UV stability, moisture resistance, food safety
- Results compared against benchmark or criterion
Qualitative methods produce descriptive data about experiences, meanings, and contexts.
End user interviews:
- Semi-structured conversation exploring how users experienced the product
- Open questions: ‘What was most satisfying about using this product? What was frustrating?’
- Analysed by identifying recurring themes
Observation:
- Watch users interacting with the product in its intended context
- Note difficulties, hesitations, workarounds, and unexpected uses
Think-aloud protocol:
- User verbalises thoughts during product use
- Captures real-time cognitive responses; reveals difficulties that ratings and written feedback cannot
Written feedback forms (open-ended):
- Users write free responses to prompts about specific aspects
- Richer than ratings but harder to aggregate
Aesthetic critique:
- Expert or peer review of visual, tactile, and material qualities
- Structured around specific aesthetic criteria from the brief
For the student’s own product:
1. Test each evaluation criterion: record method used and result obtained
2. Triangulate: use both quantitative and qualitative methods for the same criterion where possible
3. Gather feedback from at least one representative end user
4. Assess the product against the design brief holistically (not just criteria one by one)
5. Identify the most significant successes and the most significant failures
6. Propose specific, evidence-based improvements for each failure
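The six steps above amount to keeping a structured record per criterion: the method used, the kind of data it produced, the result, and the judgment reached. A minimal sketch of such a record, with hypothetical criteria and results invented for illustration:

```python
# Hypothetical example: one record per evaluation criterion.
# Criteria, methods, and results are invented.
from dataclasses import dataclass

@dataclass
class CriterionResult:
    criterion: str
    method: str
    data_type: str      # "quantitative" or "qualitative"
    result: str
    met: bool

record = [
    CriterionResult("Holds 5 kg shelf load", "load test",
                    "quantitative", "held 5 kg with 2 mm deflection", True),
    CriterionResult("Comfortable grip", "think-aloud protocol",
                    "qualitative", "two users reported edge discomfort", False),
]

# Step 5-6: failed criteria become the list of improvements to propose.
failures = [r.criterion for r in record if not r.met]
print("Criteria needing improvement:", failures)
```

Listing the failed criteria directly supports steps 5 and 6: each entry in `failures` should map to one specific, evidence-based improvement.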
The same framework applies to analysing existing products:
- Apply evaluation criteria derived from the product’s apparent design brief
- Use design factors (function, aesthetics, materials, safety, ergonomics, sustainability) as the evaluation structure
- Base judgments on evidence (measurements, published specifications, user reviews), not just opinion
KEY TAKEAWAY: Strong evaluation uses both quantitative (measured, numerical) and qualitative (descriptive, experiential) evidence. Evaluation that relies on opinion alone is not evidence-based.
EXAM TIP: When asked to describe how you would evaluate a product, specify the method, what data it produces (quantitative or qualitative), which criterion it tests, and how you would use the data to make a judgment.