## Monday, August 17, 2009

### The Scoring Rate Quotient (SRQ)

Rasch expressed the expected reading rate in a reading test in relation to the expected reading rate in a reference test as follows:

 ε_i = λ_vi / λ_v1

where λ_vi is the reading rate of the generic or typical student v in the reading test in question, and λ_v1 is the reading rate of that same student in the reference test.

That translates fine into an estimation methodology if you have a very large data set, where all students address all tests, and the tests themselves are substantial enough for averaging to happen within them. You simply average the results to get your ratio.
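As a minimal sketch of that averaging (my own illustration, with made-up rates; the variable names are not Rasch's), assuming every student records a scoring rate on every test:

```python
# rates[v][i] = scoring rate of student v on test i;
# column 0 is the reference test.
rates = [
    [2.0, 3.0, 1.0],
    [4.0, 6.0, 2.0],
    [1.0, 1.5, 0.5],
]

n_students = len(rates)
n_tests = len(rates[0])

# Average each test's rate across students, then take the ratio
# to the reference test's average: epsilon_i = mean_i / mean_1.
mean_rate = [sum(row[i] for row in rates) / n_students for i in range(n_tests)]
epsilon = [mean_rate[i] / mean_rate[0] for i in range(n_tests)]

print(epsilon)  # test 1 runs at 1.5x the reference rate, test 2 at 0.5x
```

With full data like this, the ratio needs no iteration; the trouble described below starts when the data matrix is incomplete and biased.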

It doesn't work so well if you are interested in estimating the difficulty of individual test items, and especially not if you are working with data from a modern computer-based test, where the items themselves are generated randomly within difficulty levels, and where the difficulty levels are set by student performance. If such a test is working properly, the difficult items will only be addressed by the more able students, and the easy items will be addressed more often by the less able students. So if the test is working as it should, the data it generates will be biased. The difficult items will appear easier than they should, because the able students who tackle them tend to have high scoring rates, and the easy items will appear harder than they should, because the less able students who tackle them tend to have low scoring rates.

An accurate estimate of item difficulty in such a test requires that student ability be taken into account, which in turn will require some iteration through the data. Suppose we begin with a crude estimate of student ability. This must be taken into account in the estimate of item difficulty, which in turn can be used to gain a better estimate of student ability. But how?

I suggest an old-fashioned quotient. Record the scoring rates of all participants and calculate the mean. Then, when assessing item difficulty (or easiness), adjust the scoring rate recorded by any student against that item by the ratio of their mean scoring rate to the overall mean. You could call this ratio the Scoring Rate Quotient (SRQ). So if Student A's mean scoring rate is twice the overall mean, his SRQ is 2, and you need to adjust the scoring rate recorded by that student against any item by a factor that reflects this quotient. But of course, because able students tend to record higher scoring rates, the appropriate factor is not 2 but 1/2, or more generally 1/SRQA.
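The first pass might look like the following sketch (my own, with invented data; only some students meet some items, as in an adaptive test):

```python
# observed[(student, item)] = scoring rate recorded in that encounter.
observed = {
    ("A", "item1"): 4.0,  # able student, high rates
    ("A", "item2"): 2.0,
    ("B", "item2"): 1.0,  # less able student, low rates
    ("B", "item3"): 0.5,
}

# Step 1: each student's mean scoring rate, and the overall mean.
students = sorted({s for s, _ in observed})
student_mean = {
    s: sum(r for (sv, _), r in observed.items() if sv == s)
       / sum(1 for (sv, _) in observed if sv == s)
    for s in students
}
overall_mean = sum(student_mean.values()) / len(student_mean)

# Step 2: SRQ = student's mean rate / overall mean.
srq = {s: student_mean[s] / overall_mean for s in students}

# Step 3: divide each recorded rate by the student's SRQ (i.e. the
# 1/SRQ factor), then average per item to estimate item easiness.
items = sorted({i for _, i in observed})
item_easiness = {
    i: sum(r / srq[s] for (s, iv), r in observed.items() if iv == i)
       / sum(1 for (_, iv) in observed if iv == i)
    for i in items
}

print(srq)
print(item_easiness)
```

Dividing by the SRQ discounts the able student's high rates and compensates the less able student's low ones, so the shared item2 contributes comparably from both.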

Similarly, the item scoring rates should be laid out on a spectrum and the mean calculated. Then in the second pass at estimating student ability, the scoring rate recorded against each item should be adjusted according to the SRQ of that item. And again, if Item1 has an SRQ of 2, the scoring rate of any student tackling that item should be multiplied not by 2 but by 1/2, or 1/SRQ1. The scoring rate is adjusted downwards because it was an easy item, and a high scoring rate on that item should carry less weight than one recorded on a harder item.
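The second pass can be sketched in the same way (again my own illustration, with invented item SRQs and rates):

```python
# Item SRQs from the first pass: item1 easy, item3 hard.
item_srq = {"item1": 2.0, "item2": 1.0, "item3": 0.5}

# Student C's recorded rates on the items they actually saw.
recorded = {"item1": 3.0, "item3": 0.9}

# Divide each rate by the item's SRQ: high rates on easy items are
# scaled down, rates on hard items are scaled up, then averaged
# into a revised ability estimate.
adjusted = {i: r / item_srq[i] for i, r in recorded.items()}
ability = sum(adjusted.values()) / len(adjusted)

print(adjusted)
print(ability)
```

These revised ability estimates would then feed back into a further pass at item difficulty, and so on round the loop until the estimates settle.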