
IRT Equating of the MCAT. MCAT Monograph.

Authors :
Hendrickson, Amy B.
Kolen, Michael J.
Publication Year :
2001

Abstract

This study compared various equating models and procedures for a sample of data from the Medical College Admission Test (MCAT). It considered how item response theory (IRT) equating results compare with classical equipercentile results, and how results differ across IRT models, observed-score versus true-score equating, direct versus linked equating, and test forms. The practical issues and potential benefits of IRT equating are discussed. Data were from 2 forms of the test and 2 administrations, with sample sizes of 8,494, 3,638, 8,147, and 4,478. The choice between equipercentile and IRT equating affected MCAT scores. For the Biological Sciences section, the effect of using any IRT model appeared minimal, but none of the IRT model equivalents for the Physical Sciences and Verbal Reasoning sections exactly matched those of equipercentile equating. Each IRT model diverged from equipercentile equating in Physical Sciences, with the one-parameter model most discrepant and the three-parameter model least discrepant; IRT model equivalents for the Verbal Reasoning section also diverged from the equipercentile results. Random groups and common-item equating designs are currently used for the MCAT, and both designs were considered in this study, so moving to IRT equating need not create design complications, although some statistical assumptions associated with IRT models must be met. Issues in using IRT equating, including choosing a calibration program and an IRT model, are discussed. (Contains 12 figures, 37 tables, and 20 references.) (SLD)
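The two families of methods contrasted in the abstract can be sketched in a few lines. The following is a minimal illustration, not the monograph's actual procedures: a three-parameter logistic (3PL) item response function, of which the one-parameter model is a special case (fixed discrimination, no guessing parameter), and a toy equipercentile mapping that sends a raw score on one form to the score on another form with roughly the same percentile rank. All function names and data here are hypothetical.

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) item response function:
    probability of a correct response at ability theta, given
    discrimination a, difficulty b, and guessing parameter c.
    The one-parameter model fixes a across items and sets c = 0."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def equipercentile_equate(x, form_x_scores, form_y_scores):
    """Toy equipercentile equating: map raw score x on form X to the
    form Y score holding (approximately) the same percentile rank."""
    sx = sorted(form_x_scores)
    sy = sorted(form_y_scores)
    p = sum(s <= x for s in sx) / len(sx)      # percentile rank of x on form X
    idx = min(int(p * len(sy)), len(sy) - 1)   # matching position on form Y
    return sy[idx]
```

In practice, operational equipercentile equating smooths the score distributions and interpolates between discrete score points, and IRT equating first calibrates item parameters with a dedicated program; this sketch only shows the core idea of matching percentile ranks and of the logistic response curve.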

Details

Language :
English
Database :
ERIC
Publication Type :
Electronic Resource
Accession number :
ED462425
Document Type :
Numerical/Quantitative Data
Reports - Evaluative