In Chapter Two of his book, Rasch jumps from his Equation 6.1 to an approximation, which he attributes to Poisson, but he neither provides a derivation of the approximation nor writes it out in general terms.

I have found a page on the web that sets out a cursory but satisfactory explanation of the approximation. Rasch calls the approximation Poisson's Law; the page on the web calls it the Poisson distribution, or the Law of Rare Events.

Let's begin with Equation 3 from my previous blog.

p{a|n} = (n!/((n-a)!a!)) θ^{a} (1-θ)^{n-a}    (3)

Then, according to the argument, if you focus on the right-hand term, (1-θ)^{n-a}, you can approximate it as:

(1-θ)^{n-a} ≈ e^{-nθ}    (4)

where ≈ means "is approximately equal to", and θ is assumed to be small. The derivation involves isolating the left-hand side of the equation and taking the natural log:

ln((1-θ)^{n-a}) = (n-a) ln(1-θ)    (5)

It then uses another "approximation", which is not proven or explained:

ln(1-θ) ≈ -θ, where θ << 1    (6)

I have no idea why this works, but I checked it for a few values of θ, as shown in the array below:

θ       |  0.050 |  0.040 |  0.030 |  0.020 |  0.010
1-θ     |  0.950 |  0.960 |  0.970 |  0.980 |  0.990
ln(1-θ) | -0.051 | -0.041 | -0.030 | -0.020 | -0.010
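For what it's worth, approximation 6 is just the first term of the Taylor expansion ln(1-θ) = -θ - θ²/2 - θ³/3 - ..., so for small θ the error is roughly θ²/2. Here is a quick Python sketch (my own, not from the web page) that reproduces the check above:

```python
import math

# Check ln(1 - θ) ≈ -θ for the same values of θ as in the array above.
# The error term is roughly θ²/2, so it shrinks quickly as θ falls.
for theta in [0.05, 0.04, 0.03, 0.02, 0.01]:
    exact = math.log(1 - theta)
    print(f"θ = {theta:.3f}   ln(1-θ) = {exact:.3f}   -θ = {-theta:.3f}")
```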

The derivation then uses another approximation:

(n-a)(-θ) ≈ -nθ, where θ << 1    (7)

Let's have a closer look at this.

(n-a)(-θ) = -nθ + aθ

So they are saying the aθ term disappears if θ is very small, which strikes me as a bit bold, but if you plug approximations 6 and 7 into expression 5, you get:

ln((1-θ)^{n-a}) ≈ -nθ, where θ << 1    (8)

Exponentiating both sides takes you back to approximation 4.
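Approximation 4 can also be checked numerically. Here is a short Python sketch (my own illustration; the values n = 100 and θ = 0.02 are arbitrary choices, not from Rasch or the web page):

```python
import math

# Compare (1-θ)^(n-a) with the approximation e^(-nθ) for small θ and a << n.
# n = 100 and θ = 0.02 are arbitrary illustrative values.
n, theta = 100, 0.02
for a in range(5):
    exact = (1 - theta) ** (n - a)
    approx = math.exp(-n * theta)
    print(f"a = {a}   (1-θ)^(n-a) = {exact:.4f}   e^(-nθ) = {approx:.4f}")
```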

Next, if you focus on the term n!/(n-a)!, according to the argument on the web,

n!/(n-a)! ≈ n^{a}, where a << n    (9)
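Approximation 9 makes sense once you notice that n!/(n-a)! = n(n-1)...(n-a+1), a product of a factors each close to n when a << n. A quick Python check (my own, with an arbitrary n = 100):

```python
import math

# n!/(n-a)! is a falling product of a factors, each close to n when a << n,
# so it is approximately n^a. n = 100 is an arbitrary illustrative value.
n = 100
for a in [1, 2, 3, 5]:
    exact = math.factorial(n) // math.factorial(n - a)
    print(f"a = {a}   n!/(n-a)! = {exact}   n^a = {n**a}   ratio = {exact / n**a:.4f}")
```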

I won't copy out the derivation of this one, but if you substitute approximations 4 and 9 into a slight rearrangement of expression 3, you get:

a!p{a|n} ≈ n^{a} θ^{a} e^{-nθ}

p{a|n} ≈ (nθ)^{a} e^{-nθ}/a!    (10)

This can be further simplified if we define nθ as λ, the expected frequency of events:

p{a|n} ≈ λ^{a} e^{-λ}/a!    (11)
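To get a feel for how good the overall approximation is, here is a Python sketch (my own; n = 100 and θ = 0.02, giving λ = 2, are arbitrary choices) comparing the exact binomial probability from expression 3 with the Poisson approximation in expression 11:

```python
import math

# Exact binomial probability (expression 3) versus the Poisson
# approximation (expression 11) with λ = nθ.
# n = 100 and θ = 0.02 (so λ = 2) are arbitrary illustrative values.
n, theta = 100, 0.02
lam = n * theta
for a in range(6):
    binom = math.comb(n, a) * theta**a * (1 - theta)**(n - a)
    poisson = lam**a * math.exp(-lam) / math.factorial(a)
    print(f"a = {a}   binomial = {binom:.4f}   Poisson = {poisson:.4f}")
```

The agreement improves as n grows with λ held fixed, which is the "rare events" regime the approximation assumes.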

When I first read this chapter, I saw that an approximation was being claimed, and it just increased my headache. Now that I have looked at a derivation of the approximation, I can emphasise the assumptions being made and make an observation.

The assumption of approximation 8 is that the probability of an individual event is very low, and the assumption of approximation 9 is that the observed count of events is small relative to the number of trials. In theory, one implies the other, and when applied to a reading test, where the expected frequency of errors is low, the theory may translate well into practice.

But according to conventional psychometric theory (and I'm sorry I don't have references to hand), best results are achieved in a dichotomous test when the probability of success on individual items covers the mid range. This needs to be borne in mind as I proceed through the Rasch argument.