Crowdsourcing for assessment items to support adaptive learning

Med Teach. 2018 Aug;40(8):838-841. doi: 10.1080/0142159X.2018.1490704. Epub 2018 Aug 10.

Abstract

Purpose: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) "crowdsourced" from medical learners could meet the standards of many large-scale testing programs.

Methods: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted to second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty who rated each item for relevance, content accuracy, and quality of response option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students.

Results: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that items were too easy or had low discrimination indices.
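The abstract does not specify the statistical criteria used to pretest items, but the exclusion reasons reported (too easy, low discrimination) correspond to standard classical test theory item statistics. The sketch below is a minimal, generic illustration of how item difficulty (proportion correct) and a point-biserial discrimination index are typically computed; the screening thresholds shown are hypothetical and are not taken from this study.

```python
import numpy as np

def item_statistics(responses: np.ndarray) -> list[dict]:
    """Classical item analysis for a 0/1-scored response matrix.

    responses: shape (n_examinees, n_items); 1 = correct, 0 = incorrect.
    Returns, for each item, its difficulty (proportion correct) and a
    point-biserial discrimination index (correlation between the item
    score and the total score on the remaining items).
    """
    n_examinees, n_items = responses.shape
    total = responses.sum(axis=1)
    stats = []
    for j in range(n_items):
        item = responses[:, j]
        difficulty = float(item.mean())   # proportion answering correctly
        rest = total - item               # total score excluding this item
        if item.std() == 0 or rest.std() == 0:
            discrimination = 0.0          # undefined correlation; treat as 0
        else:
            discrimination = float(np.corrcoef(item, rest)[0, 1])
        stats.append({"item": j,
                      "difficulty": difficulty,
                      "discrimination": discrimination})
    return stats

def flag_items(stats, max_difficulty=0.90, min_discrimination=0.20):
    """Flag items that are too easy or discriminate poorly.

    The cutoffs (0.90, 0.20) are illustrative defaults, not the
    criteria used in the study.
    """
    return [s for s in stats
            if s["difficulty"] > max_difficulty
            or s["discrimination"] < min_discrimination]
```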

Conclusions: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.

MeSH terms

  • Crowdsourcing
  • Education, Medical, Undergraduate / methods*
  • Educational Measurement / methods*
  • Educational Measurement / standards
  • Formative Feedback*
  • Humans
  • Learning
  • Mobile Applications
  • Students, Medical