Item response theory, a family of mathematical models, is often applied to high-stakes tests to estimate test-taker ability and to determine the characteristics of test questions (i.e., items). Often, these tests contain subsets of items (testlets) grouped around a common stimulus. Because of this grouping, items within a testlet tend to be more strongly correlated with one another than with items from other testlets, which can result in moderate to strong testlet effects.
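The testlet effect described above can be illustrated with a small simulation. The sketch below is not from the report; it assumes a simple Rasch-style testlet model in which a person-by-testlet effect (gamma) is added to the usual ability-minus-difficulty term, and all parameter values (numbers of persons, testlets, and items; the testlet-effect standard deviation) are illustrative assumptions. It shows that responses to items sharing a stimulus end up more strongly correlated than responses to items from different testlets.

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_testlets, items_per = 2000, 4, 5

theta = rng.normal(0.0, 1.0, n_persons)          # test-taker ability (assumed N(0,1))
sigma_testlet = 0.8                              # testlet-effect SD (illustrative)
# Person-by-testlet effects: shared by all items within one testlet for a person
gamma = rng.normal(0.0, sigma_testlet, (n_persons, n_testlets))
b = rng.normal(0.0, 1.0, (n_testlets, items_per))  # item difficulties (assumed)

# Rasch-style testlet model: logit P(correct) = theta - b + gamma
logit = theta[:, None, None] - b[None, :, :] + gamma[:, :, None]
p = 1.0 / (1.0 + np.exp(-logit))
x = (rng.random(p.shape) < p).astype(int)        # simulated 0/1 responses

# Compare item-item correlations within vs. between testlets
flat = x.reshape(n_persons, -1)                  # persons x (testlet*item)
corr = np.corrcoef(flat, rowvar=False)
labels = [t for t in range(n_testlets) for _ in range(items_per)]

within, between = [], []
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        (within if labels[i] == labels[j] else between).append(corr[i, j])

mean_within = float(np.mean(within))
mean_between = float(np.mean(between))
# Items sharing a testlet correlate more strongly: mean_within > mean_between
```

Under these assumed settings, the shared gamma term inflates correlations among items in the same testlet relative to items that share only the ability dimension, which is the dependence a standard item response model ignores and a testlet response model is designed to absorb.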
A series of research projects was undertaken to investigate the theoretical and practical implications of the testlet effect for high-stakes tests such as the Law School Admission Test. These projects explored areas such as the development of a testlet response model to account for the testlet effect and the development of model fit statistics to accompany the model. The model was also applied to investigate the relationship between the stimulus features and the statistics used to describe individual test items, as well as the impact of the testlet effect on the assembly of test forms.
The current paper begins by summarizing findings across the series of research projects and goes on to investigate the impact of model choice on test assembly and estimates of test-taker ability. Finally, important topics that need to be addressed by future studies are discussed.
Request the Full Report
To request the full report, please email LSACResearchReport@LSAC.org.