
Structural Modeling Using Two-Step MML Procedures (RR 08-07)

by Cees A. W. Glas, University of Twente, Enschede, The Netherlands

Executive Summary

In a computerized adaptive test (CAT), test questions (items) are selected for administration based on the test taker's performance on previous items, with the intent of tailoring the difficulty of the test to the ability level of the test taker. Data from CATs are special in the sense that every test taker responds to a virtually unique set of items. Number-correct scores of different test takers are therefore no longer comparable, and statistical methods traditionally used in the analysis of number-correct scores lose their relevance. The problem can be solved by applying these traditional methods to the proficiency scores typically produced by an item response theory (IRT) model. IRT is a mathematical model used in the analysis of test data that yields comparable proficiency scores for all test takers even though they may have responded to different items in the test administration.
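As a minimal sketch of the idea (not taken from the report), the following example uses the Rasch model, the simplest IRT model, to place two test takers who answered entirely different item sets on one proficiency scale. Item difficulties and response patterns here are made-up illustration data, and the ability estimate is obtained by Newton-Raphson maximum likelihood:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model:
    P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """Maximum-likelihood proficiency estimate via Newton-Raphson.

    responses:    list of 0/1 item scores for one test taker
    difficulties: matching item difficulty parameters (assumed known,
                  e.g. from a previous calibration)
    """
    theta = 0.0
    for _ in range(iters):
        # Gradient of the log-likelihood: observed minus expected score
        grad = sum(x - rasch_prob(theta, b)
                   for x, b in zip(responses, difficulties))
        # Fisher information: sum of P * (1 - P) over items
        info = sum(p * (1.0 - p)
                   for p in (rasch_prob(theta, b) for b in difficulties))
        theta += grad / info
    return theta

# Two test takers who saw disjoint item sets, scored on the same scale:
theta_a = estimate_theta([1, 1, 0], [-1.0, 0.0, 1.0])  # easier items
theta_b = estimate_theta([1, 0, 0], [0.5, 1.0, 1.5])   # harder items
```

Because the item difficulties are on a common scale, the two theta estimates are directly comparable even though a raw number-correct score (2 vs. 1) is not. Note that pure maximum likelihood fails for all-correct or all-incorrect response patterns; operational programs use Bayesian variants for that reason.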

In this paper, we consider two methods for estimating test-taker proficiencies with traditional statistical methodology and compare the resulting estimates with those obtained using an IRT model. We base our analyses on a dataset from school-effectiveness research. The proficiency estimates were close to those obtained with the IRT model, and the computing time needed for the traditional statistical methods was about 25% of the time needed for the IRT method.
