Test theory typically deals with categorical responses to test questions (items), for instance, correct/incorrect responses or responses that represent a choice among a finite number of alternatives. Whenever technically possible, it is attractive to also collect continuous response variables that accompany these categorical responses as covariates. One obvious example is response time; others are cursor-movement data in computer-based testing, eye-tracking data, and physiological measures.
In the present report, an item response theory (IRT) model is proposed that allows for the simultaneous analysis of categorical and continuous data from a testing situation. (IRT models are the mathematical models typically used to analyze test data.) To keep the model general, we deal with the case of multidimensional abilities as well as items presented in testlets (e.g., sets of items based on a common text passage).
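To make the idea of jointly modeling a categorical response and an accompanying continuous variable concrete, the sketch below combines a standard 2PL model for correctness with a lognormal model for response time, in the spirit of hierarchical response-time models. This is an illustration under assumed parameterizations (item parameters `a`, `b`, `alpha`, `beta`; person parameters `theta` for ability and `tau` for speed), not the specific model proposed in the report.

```python
import math

def joint_log_likelihood(theta, tau, items, responses, log_times):
    """Joint log-likelihood for one examinee under a simple
    2PL (correctness) + lognormal (response-time) sketch.

    theta     : ability parameter (correctness dimension)
    tau       : speed parameter (response-time dimension)
    items     : list of dicts with keys a, b (2PL) and alpha, beta (lognormal)
    responses : 0/1 correctness per item
    log_times : natural log of the response time per item
    """
    ll = 0.0
    for item, u, lt in zip(items, responses, log_times):
        # Categorical part: 2PL probability of a correct response
        p = 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
        # Continuous part: log T ~ Normal(beta - tau, 1 / alpha^2)
        mu = item["beta"] - tau
        alpha = item["alpha"]
        ll += (math.log(alpha) - 0.5 * math.log(2.0 * math.pi)
               - 0.5 * (alpha * (lt - mu)) ** 2)
    return ll
```

Because the two components share person parameters, maximizing this likelihood over examinees lets the continuous variable sharpen the estimate of ability; a multidimensional or testlet extension would add further person and item parameters to the same likelihood.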
A method for estimating the parameters of the proposed model is presented, and statistical tests of model fit are evaluated. The false positive error rates of the fit tests were low and decreased as sample size and test length increased. The power of the tests to detect true model misfit decreased as the complexity of the IRT model increased.
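The false positive (Type I) error rate of a fit test is typically evaluated by Monte Carlo simulation: generate data from the model itself, apply the test, and count how often it rejects. The toy sketch below shows that recipe with a deliberately simple stand-in "model" (Bernoulli responses) and "test" (a z-test on the overall proportion correct); the report's actual models and fit statistics are different, and all names here are illustrative.

```python
import random

def false_positive_rate(n_examinees, n_items, p_correct=0.5,
                        n_reps=2000, z_crit=1.96, seed=1):
    """Monte Carlo estimate of a fit test's Type I error rate.

    Data are simulated under the null (the model holds), the test is
    applied to each replication, and the rejection proportion is
    returned. With a well-calibrated test and z_crit = 1.96, this
    should be close to the nominal 0.05.
    """
    rng = random.Random(seed)
    n = n_examinees * n_items  # total item responses per replication
    rejections = 0
    for _ in range(n_reps):
        # Simulate all responses from the true model
        correct = sum(rng.random() < p_correct for _ in range(n))
        p_hat = correct / n
        # z-test of the observed proportion against the true value
        se = (p_correct * (1.0 - p_correct) / n) ** 0.5
        if abs((p_hat - p_correct) / se) > z_crit:
            rejections += 1
    return rejections / n_reps
```

Rerunning this with larger `n_examinees` or `n_items` is the simulation analogue of the sample-size and test-length manipulations summarized above.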
Request the Full Report
To request the full report, please email LSACResearchReport@LSAC.org.