Research Reports

Detection of Aberrant Answer Changes via Kullback–Leibler Divergence (RR 14-04)

In standardized testing, test takers may change their answer choices for various reasons. The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at some testing organizations. Research on answer-changing behavior has recently branched off in several directions, including modeling of ACs and addressing scanning errors. Data representing answer choices made prior to the final choice are impacted by such things as scanning errors in paper-and-pencil testing and the potential for a lengthy sequence of selected answer choices before the final choice in computer-based testing. These non-final answer choices are also affected by test-taker warm-up, fatigue, or certain answering strategies. Some statistics used in practice (such as the number of wrong-to-right ACs) capitalize on the inconsistencies inherent in non-final answer choices, especially at the individual test-taker level, and this may result in a high false-positive detection rate when seeking to identify aberrant test-taker behavior.

This paper presents a conservative approach to analyzing ACs at the individual test-taker level. The information about non-final answer choices is used only to partition the final responses into two disjoint subsets: responses where an AC occurred and responses where one did not. A new statistic is presented that is based on the difference in performance between these subsets. Answer-changing behavior was simulated to produce realistic distributions of wrong-to-right, wrong-to-wrong, and right-to-wrong ACs. Results of these preliminary analyses were encouraging, with the new statistic outperforming two popular statistics.
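The abstract does not specify the form of the new statistic, so the following is only a minimal sketch of the partition-and-compare idea it describes: split a test taker's final responses by whether an AC occurred and measure, via a Kullback–Leibler divergence, how sharply performance on changed items departs from performance on unchanged items. The function name `ac_divergence`, its arguments, and the Bernoulli formulation are illustrative assumptions, not the statistic defined in RR 14-04.

```python
import numpy as np

def ac_divergence(final_correct, changed, eps=1e-9):
    """Illustrative answer-change statistic (hypothetical; not the RR 14-04 formula).

    final_correct : array of 1/0, correctness of each final response
    changed       : boolean array, True where an answer change occurred
    Returns KL(P || Q), where P and Q are the correct/incorrect proportions
    on changed and unchanged items, respectively.
    """
    final_correct = np.asarray(final_correct, dtype=float)
    changed = np.asarray(changed, dtype=bool)

    # Partition the final responses into the two disjoint subsets from the
    # abstract: items where an AC occurred and items where it did not.
    p_ac = final_correct[changed].mean()    # proportion correct on AC items
    p_no = final_correct[~changed].mean()   # proportion correct on non-AC items

    # Bernoulli distributions over {correct, incorrect} for each subset,
    # smoothed slightly to avoid log(0).
    p = np.array([p_ac, 1.0 - p_ac]) + eps
    q = np.array([p_no, 1.0 - p_no]) + eps
    p, q = p / p.sum(), q / q.sum()

    # Large divergence: the test taker's performance on changed items is
    # very different from their performance on unchanged items.
    return float(np.sum(p * np.log(p / q)))

# Example: changed answers are almost all correct despite modest
# performance on unchanged items, giving a relatively large divergence.
rng = np.random.default_rng(0)
changed = rng.random(50) < 0.2
correct = np.where(changed, rng.random(50) < 0.95, rng.random(50) < 0.55)
print(round(ac_divergence(correct, changed), 3))
```

In this sketch the divergence grows when, for example, nearly all changed answers end up correct while unchanged answers are correct only about half the time; how the actual report weights or conditions such differences is not stated in the abstract.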

Request the full report

Additional reports in this collection


Evidence to Support Validity Claims for Using LSAT Scores...

Law School Admission Test (LSAT) scores provide a standard measure of an applicant’s proficiency in a well-defined set of important skills associated with success in law school coursework. LSAT scores are also a strong predictor of first-year grades (FYG) and cumulative grade point average (CGPA) in law school. The most recent correlational study of LSAT results (2019) shows that LSAT scores are far superior to undergraduate grade point average (UGPA) in predicting FYG...

Understanding and Interpreting Law School Enrollment Data...

The Law School Admission Council (LSAC) has a long-standing commitment to diversity, equity, and inclusion in legal education and in the legal profession. In line with its mission to promote quality, access, and equity in legal education, LSAC is providing this report, Understanding and Interpreting Law School Enrollment Data: A Focus on Race and Ethnicity, to help law schools, admission professionals, and other legal education stakeholders understand how we are measuring who is in the pipeline.