CMStatistics 2023
B1359
Title: Considering latent evolving ability in test equating: Effects on final ranking and item parameter estimates

Authors: Carla Galluccio - University of Florence (Italy) [presenting]
Silvia Bacci - University of Florence (Italy)
Bruno Bertaccini - University of Florence (Italy)
Leonardo Grilli - University of Florence (Italy)
Carla Rampichini - University of Florence (Italy)
Abstract: In large-scale assessments, using multiple test forms to evaluate students' abilities is a common practice. However, calibrating items before the official test can be problematic owing to the limited availability of items. A solution is calibrating items during the first test administration and then using the parameter estimates in subsequent evaluations. Nevertheless, this approach does not account for potential differences in population abilities, which is particularly significant when the outcome is a merit ranking. An example is provided by university entrance tests, where the baseline population could differ in average ability from the populations to which the tests are administered later. The impact of differences in average ability across populations on test equating and on the merit ranking is investigated. The study also explores how calibrating items on one population affects ability estimates in populations with different average ability levels. The main findings show that, while calibrating items on populations that differ in average ability does not affect the final merit ranking, it does affect the item parameter estimates.
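The mechanism described above can be sketched with a small Rasch-model simulation. This is an illustrative setup only, not the authors' actual study design: the sample sizes, the number of items, the 0.5 mean-ability shift, and the crude logit-based calibration are all assumptions made for the sketch. It shows the two findings qualitatively: a mean-ability shift in the calibration sample is absorbed into the item difficulty estimates, while the within-cohort merit ranking is unchanged because the raw score is a sufficient statistic for ability in the Rasch model.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def simulate(theta, b):
    """Rasch responses: P(X_ij = 1) = sigmoid(theta_i - b_j)."""
    p = sigmoid(theta[:, None] - b[None, :])
    return (rng.random(p.shape) < p).astype(int)

def calibrate(X):
    """Crude difficulty estimate from the logit of proportion correct
    (implicitly assumes the calibration sample has mean ability 0)."""
    p = X.mean(axis=0).clip(0.01, 0.99)
    return -np.log(p / (1.0 - p))

def ml_ability(score, b, lo=-8.0, hi=8.0):
    """ML ability for a raw score under fixed difficulties, by bisection
    (the expected raw score is strictly increasing in theta)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sigmoid(mid - b).sum() < score:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b_true = np.linspace(-1.5, 1.5, 20)

# Baseline cohort (mean ability 0) used to calibrate the item bank
theta0 = rng.normal(0.0, 1.0, 2000)
b_base = calibrate(simulate(theta0, b_true))

# Later cohort whose average ability is higher by 0.5
theta1 = rng.normal(0.5, 1.0, 2000)
b_shift = calibrate(simulate(theta1, b_true))

# 1) The ability shift is absorbed into the item parameter estimates:
#    items calibrated on the abler cohort look systematically easier
print(round(float(np.mean(b_shift - b_base)), 2))

# 2) But the within-cohort merit ranking is unchanged: under either
#    calibration, the ML ability estimate is strictly increasing in the
#    raw score, so both calibrations order examinees identically
scores = np.arange(1, len(b_true))
abil_base = np.array([ml_ability(s, b_base) for s in scores])
abil_shift = np.array([ml_ability(s, b_shift) for s in scores])
print(np.array_equal(np.argsort(abil_base), np.argsort(abil_shift)))
```

Under this sketch, the average estimated difficulty drops by roughly the size of the mean-ability shift, while the ordering of examinees by estimated ability is the same under both calibrations, mirroring the abstract's main findings.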