A standardized rubric to evaluate student presentations

Am J Pharm Educ. 2010 Nov 10;74(9):171. doi: 10.5688/aj7409171.

Abstract

Objective: To design, implement, and assess a rubric to evaluate student presentations in a capstone doctor of pharmacy (PharmD) course.

Design: A 20-item rubric was designed and used to evaluate student presentations in a fourth-year capstone course in 2007-2008; it was then revised and expanded to 25 items and used to evaluate presentations in the same course in 2008-2009. Two faculty members evaluated each presentation.

Assessment: The Many-Facets Rasch Model (MFRM) was used to determine the rubric's reliability, quantify the contribution of evaluator harshness/leniency to scoring, and assess grading validity by comparing the current grading method with a criterion-referenced grading scheme. In 2007-2008, rubric reliability was 0.98, with a separation of 7.1 and 4 rating scale categories. In 2008-2009, MFRM analysis suggested that 2 of 98 grades be adjusted to eliminate evaluator leniency, while a further criterion-referenced MFRM analysis suggested that 10 of 98 grades be adjusted.
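
For context, the many-facet Rasch model is conventionally written (following Linacre's formulation; the facet labels below are illustrative rather than taken from this article) as a log-odds model in which presenter ability, rubric-item difficulty, evaluator severity, and the rating-scale thresholds combine additively:

\ln\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = B_n - D_i - C_j - F_k

where B_n is the ability of presenter n, D_i the difficulty of rubric item i, C_j the severity (harshness/leniency) of evaluator j, and F_k the threshold between rating categories k-1 and k. The reported figures are mutually consistent under the standard Rasch relationship between separation G and reliability R:

G = \sqrt{\frac{R}{1-R}} = \sqrt{\frac{0.98}{0.02}} = 7

which matches the reported separation of approximately 7.1 for the 2007-2008 data.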

Conclusion: The evaluation rubric was reliable and evaluator leniency appeared minimal. However, a criterion-referenced re-analysis suggested a need for further revisions to the rubric and evaluation process.

Keywords: assessment; criterion-referenced grading; evaluation; rating scale; reliability; rubric.

Publication types

  • Validation Study

MeSH terms

  • Education, Pharmacy / methods*
  • Educational Measurement
  • Humans
  • Models, Statistical*
  • Reproducibility of Results
  • Students, Pharmacy*