Comparability of Test-Taker Proficiency Scores on Computer Adaptive Tests Using Real and Simulated Data (RR 11-04)

by Josiah Evans, Law School Admission Council; Jimmy de la Torre, Rutgers University; and Peter J. Pashley, Law School Admission Council

Executive Summary

The simulation of correct/incorrect responses to test questions (items) is a commonly used analytical technique with many potential benefits, but it is unclear to what extent these artificially generated datasets compare to real-world test-taker item responses. This study compares real, large-scale test-taker item response data with simulated data to determine the extent to which simulated data accurately represent real-world testing outcomes. Using an original method, this study matched item response data from multiple administrations of the Law School Admission Test (LSAT) to create a single dataset of 534 items and 5,000 synthetic test takers. Numerous item response theory (IRT)-based comparisons between real and simulated data were made across three test-length conditions. Results indicate that simulated data are largely similar to real test data, with minor caveats for shorter-length tests.
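To illustrate the general technique the summary describes, the following is a minimal sketch of simulating dichotomous (correct/incorrect) item responses under a three-parameter logistic (3PL) IRT model. The specific model, parameter distributions, and function names here are illustrative assumptions, not the study's actual procedure; only the dataset dimensions (534 items, 5,000 synthetic test takers) come from the summary.

import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(theta, a, b, c, rng):
    """Simulate 0/1 item responses under an assumed 3PL IRT model.

    theta   : (N,) test-taker proficiencies
    a, b, c : (J,) item discrimination, difficulty, and guessing parameters
    Returns an (N, J) matrix of simulated correct (1) / incorrect (0) responses.
    """
    # P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    z = a[None, :] * (theta[:, None] - b[None, :])
    p = c[None, :] + (1.0 - c[None, :]) / (1.0 + np.exp(-z))
    # Compare each probability to a uniform draw to generate responses
    return (rng.random(p.shape) < p).astype(int)

# Example dimensions matching the pooled dataset described above
N, J = 5000, 534
theta = rng.normal(0.0, 1.0, N)     # proficiencies (assumed standard normal)
a = rng.lognormal(0.0, 0.3, J)      # discriminations (assumed lognormal)
b = rng.normal(0.0, 1.0, J)         # difficulties (assumed standard normal)
c = rng.uniform(0.05, 0.25, J)      # lower asymptotes / guessing (assumed uniform)
responses = simulate_responses(theta, a, b, c, rng)

Simulated response matrices of this kind can then be calibrated and scored alongside real data so that proficiency estimates from the two sources can be compared under matched conditions.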
