Investigating Crowdsourcing to Generate Distractors for Multiple-Choice Assessments

09/10/2019
by Travis Scheponik, et al.

We present and analyze results from a pilot study that explores how crowdsourcing can be used to generate distractors (incorrect answer choices) for multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools (CATS) Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering the responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each group. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we collected feedback on the resulting new draft test items (including distractors) from additional Amazon Mechanical Turk subjects. Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool for generating effective distractors (those attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional approach, in which one or more experts draft distractors, drawing on talk-aloud interviews with subjects to uncover their misconceptions. These results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
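The abstract describes the distractor-generation pipeline only at a high level. The sketch below shows one plausible automation of its first three steps (filtering, grouping, and selecting the largest groups), assuming responses are short free-text strings. The effort filter, the difflib-based similarity rule, its 0.6 threshold, and all function names are illustrative assumptions, not the authors' published procedure; in the study, refining each group's representative into a polished distractor remained a manual step.

```python
# Illustrative sketch of the pipeline the abstract outlines: filter
# responses, group similar ones, keep the four largest groups, and pick a
# representative per group. The grouping rule (difflib ratio >= 0.6) and
# all names below are assumptions, not the authors' published method.
from difflib import SequenceMatcher

def looks_genuine(response: str) -> bool:
    """Crude effort filter: drop empty or very short answers."""
    return len(response.strip().split()) >= 3

def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    """Treat two responses as expressing the same idea if their
    normalized edit-based similarity meets the (assumed) threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def draft_distractors(responses: list[str], k: int = 4) -> list[str]:
    """Return one representative response from each of the k largest
    groups of mutually similar responses."""
    groups: list[list[str]] = []
    for r in filter(looks_genuine, responses):
        for g in groups:
            if similar(r, g[0]):   # compare against the group's seed response
                g.append(r)
                break
        else:
            groups.append([r])     # no match found: start a new group
    groups.sort(key=len, reverse=True)
    # The "representative" here is just each group's most common wording;
    # the study refined these by hand into final distractors.
    return [max(g, key=g.count) for g in groups[:k]]
```

A simple greedy pass like this is enough for ~180 short responses; a larger collection would likely call for proper clustering over text embeddings rather than pairwise edit similarity.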

Related research

Experiences and Lessons Learned Creating and Validating Concept Inventories for Cybersecurity (04/10/2020)
We reflect on our ongoing journey in the educational Cybersecurity Asses...

The CATS Hackathon: Creating and Refining Test Items for Cybersecurity Concept Inventories (01/26/2019)
For two days in February 2018, 17 cybersecurity educators and profession...

Exploring Effectiveness of Inter-Microtask Qualification Tests in Crowdsourcing (12/20/2020)
Qualification tests in crowdsourcing are often used to pre-filter worker...

Collecting Image Description Datasets using Crowdsourcing (11/12/2014)
We describe our two new datasets with images described by humans. Both t...

Evaluating hardware differences for crowdsourcing and traditional recruiting methods (06/16/2023)
The most frequently used method to collect research data online is crowd...

Safeguarding Crowdsourcing Surveys from ChatGPT with Prompt Injection (06/15/2023)
ChatGPT and other large language models (LLMs) have proven useful in cro...

Probabilistic Multigraph Modeling for Improving the Quality of Crowdsourced Affective Data (01/04/2017)
We proposed a probabilistic approach to joint modeling of participants' ...
