Optimal Detection of Aberrant Answer Changes (RR 16-02)
In standardized multiple-choice testing, test takers often change their answers for various reasons. The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at some testing organizations. This report presents two new approaches to analyzing ACs at the individual test-taker level. Information about previous answers is used only to partition the data into two disjoint subsets: responses on which an AC occurred and responses on which no AC occurred. The objective was to develop an approach with optimal statistical power. The two statistics presented are based on probabilistic aspects of the difference in performance between these two subsets. Answer-changing behavior was simulated so that realistic distributions of wrong-to-right, wrong-to-wrong, and right-to-wrong ACs were obtained. Results of preliminary analyses were encouraging: the proposed statistics outperformed other widely used methods.
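The report's two statistics are not reproduced in this abstract, but the partitioning step it describes can be illustrated with a minimal sketch. The function names below (`partition_by_answer_change`, `performance_gap`) are hypothetical, and the simple gap in proportion correct stands in for the report's probabilistic statistics, which are defined in the full text.

```python
def partition_by_answer_change(scored, changed):
    """Split item scores (1 = correct, 0 = wrong) into two disjoint
    subsets: responses where an AC occurred and responses where it did not."""
    ac = [s for s, c in zip(scored, changed) if c]
    non_ac = [s for s, c in zip(scored, changed) if not c]
    return ac, non_ac

def performance_gap(scored, changed):
    """Difference in proportion correct between the AC and non-AC subsets.
    A large positive gap (much better performance on changed responses)
    is the kind of pattern an AC analysis would flag as aberrant."""
    ac, non_ac = partition_by_answer_change(scored, changed)
    if not ac or not non_ac:
        return None  # gap is undefined if either subset is empty
    return sum(ac) / len(ac) - sum(non_ac) / len(non_ac)
```

For example, a test taker who is correct on 3 of 4 changed responses but only 4 of 6 unchanged responses has a gap of 0.75 - 0.667 ≈ 0.083; the actual statistics in the report assess how improbable such a difference is under normal answer-changing behavior.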