Wednesday, April 20, 2011
PCRM - Guessing
The Rasch model does not include guessing. That does not make guessing go away. Multiple-choice, by design, has a built-in average random guessing score of one part in the number of printed answer options. Active scoring starts at 25% for 4-option questions scored by counting right marks; this scores and rewards the lowest levels of thinking. Active scoring starts at 50% for Knowledge and Judgment Scoring, where higher levels of thinking are assessed and rewarded. If a student elects to mark all questions, both methods of scoring, included in PUP, yield the same score. The two methods of scoring respond the same to guessing.
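The arithmetic above can be sketched as a short simulation. This is a minimal sketch, not PUP's actual code; it assumes the common Knowledge and Judgment Scoring weights of 2 points for a right mark, 1 for an omit, and 0 for a wrong mark, which is what makes a fully omitted test start at 50%.

```python
import random

def right_count_score(marks):
    """Percent score from counting right marks: 1 point per right answer."""
    return 100 * sum(1 for m in marks if m == "right") / len(marks)

def kjs_score(marks):
    """Knowledge and Judgment Scoring (assumed weights: 2 right, 1 omit,
    0 wrong), so an entirely omitted test scores 50%."""
    points = {"right": 2, "omit": 1, "wrong": 0}
    return 100 * sum(points[m] for m in marks) / (2 * len(marks))

# A blank answer sheet starts at 50% under Knowledge and Judgment Scoring.
print(kjs_score(["omit"] * 100))

# Pure random guessing on 100 four-option questions averages about 25%,
# and when every question is marked the two methods agree exactly.
random.seed(1)
guessed = ["right" if random.random() < 0.25 else "wrong" for _ in range(100)]
print(right_count_score(guessed), kjs_score(guessed))
```

When every question is marked, the 2/1/0 weights reduce to right-count scoring, which is why the two methods respond identically to a student who guesses on everything.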
Knowledge and Judgment Scoring, however, gives students the responsibility to learn and to report what they trust they know and can do. It is one form of “student-centered” instruction. This is critical in developing high quality self-correcting students, and as a result, high scoring students.
Five guessing items were found on Part 1&2 and six on Part 3&4 of the biology fall 88 test. These are items that fewer students than the average test score elected to mark, and of those who did mark, fewer than that portion were right. A few students believed they knew, but they did not know.
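The rule for flagging guessing items can be sketched as a filter over a mark matrix. This is a hypothetical reconstruction, not PUP's implementation; the data layout, function name, and toy data are all assumptions made for illustration.

```python
def guessing_items(marks, item_names):
    """Flag items that fewer students than the average score elected to
    mark, where fewer of the markers than that portion were right.
    marks: rows = students, columns = items; entries "right"/"wrong"/"omit".
    """
    n_students = len(marks)
    # Average test score as the overall proportion of right marks.
    avg = sum(row.count("right") for row in marks) / (n_students * len(item_names))
    flagged = []
    for j, name in enumerate(item_names):
        column = [row[j] for row in marks]
        marked = [m for m in column if m != "omit"]
        if not marked:
            continue
        marking_rate = len(marked) / n_students
        right_rate = marked.count("right") / len(marked)
        if marking_rate < avg and right_rate < avg:
            flagged.append(name)
    return flagged

# Toy data: item C is marked by few students, and the marker is wrong.
marks = [
    ["right", "right", "wrong"],
    ["right", "right", "omit"],
    ["right", "omit",  "omit"],
    ["right", "omit",  "omit"],
]
print(guessing_items(marks, ["A", "B", "C"]))
```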
The four [unfinished] items on Part 3&4 were also among the six guessing items. Most of the Ministep “most unexpected right responses” (dark blue) occurred on these items on Part 1&2 and Part 3&4. Are they guessing (chance or good luck), or just the marking error that also occurs among the other groups of items?
Assuming that the most unexpected responses involve carelessness, guessing, and marking error, these play only a small part in determining a student’s score. The rate tends to increase as student performance decreases. Many unexpected wrong and right answers tend to occur in pairs, one canceling the effect of the other. Only consistent analysis is needed to obtain comparable results.
If I interpret the above correctly, the partial credit Rasch model (PCRM) can ignore guessing in estimating person and item measures. However, a teacher or administrator cannot ignore the active starting score of a multiple-choice test in setting cut scores. A cut score set a few points above the range of random guessing is a bogus passing standard even if the test contains “difficult” items.
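The last point can be checked with the binomial distribution: a pure guesser on a 4-option test averages 25% right, and a cut score only a few points above that is still reached by luck alone a noticeable fraction of the time. A small sketch (the 100-item test and the cut score of 30 are illustrative numbers, not from the source):

```python
from math import comb

def chance_of_at_least(k, n, p=0.25):
    """Probability that a pure guesser on n four-option items
    gets k or more right (binomial upper tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# On a 100-item test, a cut score of 30 sits just above the 25% chance
# level, yet a pure guesser still clears it a noticeable share of the time.
print(round(chance_of_at_least(30, 100), 3))
```

The tail probability here is on the order of 15%, which is why a passing standard set only a few points above the chance level certifies luck rather than knowledge.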