A variety of ways have evolved to report standardized test results, all serving the “Lake Wobegon” goal of every school being above average or improving each year: the percent right score, the ranked score, the scaled score, the standardized scaled score, and percent improvement.
The percent right score is the number of right marks divided by the number of questions on the test. It cannot be manipulated. Ministep, however, calculates the number of right marks divided by the total number of marks when estimating student ability and item difficulty measures.
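A minimal sketch of the distinction, using made-up responses; treating None as an omitted question (and therefore not a mark) is my reading of “total number of marks,” and the variable names are mine, not Ministep's:

```python
# Illustrative only; not Ministep's actual code.
# 1 = right mark, 0 = wrong mark, None = omitted question (no mark made).
responses = [1, 1, 0, 1, None, None, 1, 0, 1, 1]

n_questions = len(responses)                           # questions on the test
n_marks = sum(1 for r in responses if r is not None)   # marks actually made
n_right = sum(1 for r in responses if r == 1)          # right marks

percent_right = n_right / n_questions   # 6/10 = 0.60, the reported score
ministep_ratio = n_right / n_marks      # 6/8  = 0.75, basis for measures

print(f"percent right score: {percent_right:.2f}")
print(f"right marks / marks: {ministep_ratio:.2f}")
```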
Ranked scores are obtained by sorting right mark scores.
Scaled scores are positive and arbitrary. In general, the central scaled score corresponds to zero logits. Scaling involves a multiplier and a constant: a logit scale running from -5 to +5 can be multiplied by 10 (-50 to +50) and then shifted by a constant of 50 (0 to 100). The multiplier spreads out the scale; the constant sets the value that corresponds to zero logits. Winsteps Table 20.1 prints a logit scale (the default) and a percent (0 to 100 point) scale. Table 20.2 prints a 500-point scale.
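A short sketch of the multiply-by-10, add-50 rescaling described above; the example logit measures are invented:

```python
# Linear rescaling of logit measures: 10 spreads -5..+5 out to -50..+50,
# then the constant 50 shifts the result to 0..100.
MULTIPLIER = 10.0  # spreads out the scale
CONSTANT = 50.0    # sets the value reported at zero logits

def to_scaled(logit: float) -> float:
    return MULTIPLIER * logit + CONSTANT

for logit in (-5.0, -1.3, 0.0, 2.4, 5.0):
    print(f"{logit:+.1f} logits -> scaled score {to_scaled(logit):.0f}")
```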
Standardized scaled scores go one step further, adjusting the result to fit a predefined range. The central scaled score may then no longer correspond to zero logits, and the scaling multiplier and constant must be published before anyone can recover the original logit scale and the original raw scores.
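Continuing the same example, recovering logits is a matter of inverting the line, which is only possible when the multiplier and constant are published:

```python
# Inverting the scaling above. Without the published MULTIPLIER and
# CONSTANT, a scaled score of 74 tells you nothing about the logit scale.
MULTIPLIER = 10.0
CONSTANT = 50.0

def to_logit(scaled: float) -> float:
    # Solve scaled = MULTIPLIER * logit + CONSTANT for logit.
    return (scaled - CONSTANT) / MULTIPLIER

print(to_logit(74.0))  # recovers 2.4 logits
```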
Percent improvement compares scaled scores from one year to the next. When neither the scaled scores nor the percent right scores are published, there is no way to recover the basic test results.
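The post does not give a formula for percent improvement, so the definition below, year-over-year change as a percentage of last year's scaled score, is an assumption for illustration:

```python
# Assumed definition of percent improvement; the school means are hypothetical.
def percent_improvement(last_year: float, this_year: float) -> float:
    return 100.0 * (this_year - last_year) / last_year

print(f"{percent_improvement(62.0, 65.0):.1f}%")  # prints 4.8%
```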
Each “improvement” in test score reporting over the past few years has made the output from Winsteps less transparent. The Rasch model and traditional CTT analyses provide the data for these methods of reporting, but they are not responsible for the end uses. Most disturbing is a recent report that, in general, there is no way to audit the judgments made in developing the school values reported each year.