The statistical theory of estimating and testing item response theory (IRT) models for items (questions) with discrete (correct or incorrect) responses has been thoroughly developed (recall that IRT is a mathematical model that is typically used to analyze test data). In contrast, the theory for IRT models for items with continuous responses has received little attention. This omission is mainly because, so far, the continuous response format has seldom been used by the testing industry. An exception may be the rating scale item format, where a respondent marks a position on a line to express his or her opinion about a topic. Recently, continuous responses have attracted interest as complementary information to accompany discrete item responses. One may think of the response time needed to answer an item in a computerized adaptive testing situation, or of computer ratings of tasks performed in a simulated environment, as continuous responses.
In the present report, an existing model for the analysis of continuous responses was extended to include a procedure for estimating the parameters in the model. Tests for evaluating the fit of the model were developed and assessed. These tests can be used to detect problematic items and violations of the model's assumptions. The tests were shown to control their false positive error rate well and to have high power to detect true effects.
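The report does not specify its model or estimation procedure here, but the general idea can be illustrated with a minimal sketch. Assuming a simple linear continuous-response formulation, where each response is X[p, i] = a[i] * (theta[p] - b[i]) + noise (the names `theta`, `a`, and `b` for the latent trait, item discrimination, and item difficulty are illustrative, not taken from the report), one can simulate responses and recover the item parameters by per-item least squares when the latent trait is known, as in a simulation study:

```python
# Illustrative sketch only -- this linear model and these parameter
# names are assumptions, not the model used in the report.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 5

theta = rng.normal(0.0, 1.0, n_persons)            # latent trait per person
a = rng.uniform(0.8, 1.5, n_items)                 # item discriminations
b = rng.normal(0.0, 0.7, n_items)                  # item difficulties
noise = rng.normal(0.0, 0.3, (n_persons, n_items)) # measurement error

# Continuous responses: X[p, i] = a[i] * (theta[p] - b[i]) + noise
X = a * (theta[:, None] - b) + noise

# Per-item estimation: regress X[:, i] on theta.
# The slope estimates a[i]; the intercept equals -a[i] * b[i].
for i in range(n_items):
    slope, intercept = np.polyfit(theta, X[:, i], 1)
    a_hat, b_hat = slope, -intercept / slope
    print(f"item {i}: a={a[i]:.2f} a_hat={a_hat:.2f} "
          f"b={b[i]:.2f} b_hat={b_hat:.2f}")
```

With 2,000 simulated respondents, the recovered estimates fall close to the generating values; a model-fit test of the kind the report describes could then, for example, examine the residuals of each item for departures from the assumed model.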
Request the Full Report
To request the full report, please email Linda Reustle at lreustle@LSAC.org.