Part 1

Working in an established university like Universiti Kebangsaan Malaysia has its advantages. Whenever I send in the students’ OMR (Optical Mark Recognition) forms, a few days later I get their marks delivered by email along with a hard copy of the OMR analysis. The report is usually about 11 pages long, and the younger lecturers usually discard it since they have no idea what it is all about.

Front page of the OMR report.

To me, what matters is not only the students’ marks. When I go through the report, it is to make sure that the OMR machine marked the forms correctly, and that it used the correct answer keys.

To make sure the marking was done correctly, I would refer to the following page:

Sum up the number of correct and wrong answers.

I would look at the number of correct answers, wrong answers and blank answers. Their sum should equal the total number of questions in the examination paper. In this example, there were 25 correct answers and 5 wrong answers, giving a total of 30, the same as the number of questions.
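If you prefer to script this check, here is a minimal sketch in Python. The counts are the ones from the example above (blank = 0 is implied, since 25 + 5 already makes 30):

    # Sanity check: correct + wrong + blank must add up to the number of questions.
    n_questions = 30
    correct, wrong, blank = 25, 5, 0

    total = correct + wrong + blank
    assert total == n_questions, f"Counts add up to {total}, expected {n_questions}"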

As for the answer keys, I would check the questions’ difficulty index and discrimination index page, as illustrated below.

D is the Difficulty Index & R is the Discrimination Index

I would quickly identify those questions with

  • a difficulty index of 30 or below, or
  • a discrimination index that is negative or less than 0.15,

and immediately refer back to the original questions, to check whether the keys given were correct. This has saved me from making mistakes many times in the past. Remember that we are mere human beings and therefore not free from errors. Such technology helps prevent us from making mistakes, so make full use of it.
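Here is a minimal sketch of that screening step in Python. The per-question (D, R) values are made up for illustration, and I am assuming D is reported on a 0–100 scale as in the report above:

    # Flag questions whose answer keys should be re-checked against the originals.
    # D = Difficulty Index (0-100 scale), R = Discrimination Index.
    questions = {1: (75, 0.40), 2: (28, 0.22), 3: (55, -0.10), 4: (60, 0.05)}

    # A negative R is automatically caught by the r < 0.15 test.
    flagged = [q for q, (d, r) in questions.items() if d <= 30 or r < 0.15]
    print("Re-check the answer keys for questions:", flagged)  # [2, 3, 4]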

The report also gives the following indices:

  • Average Difficulty Index – the average of all the questions’ Difficulty Indices
  • Average Discrimination Index – the average of all the questions’ Discrimination Indices
  • Kuder and Richardson Formula 20 Reliability – values range from 0 to 1. A high value indicates reliability, while too high a value (in excess of 0.90) indicates a homogeneous test.

The above values must be included in the final examination report. However, some of the values calculated automatically by this OMR software are WRONG! I am rather surprised that this mistake has never been rectified, since UKM has a large number of esteemed statisticians. It is rather embarrassing; hopefully it will be rectified soon.

Only the Total PQ, Average Diff. & Disc. Index are correct.

How to Calculate Kuder and Richardson Formula 20 Reliability Index

The formula for the KR20 reliability coefficient (rho) is illustrated below:

Kuder and Richardson Formula 20 Reliability Index
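The formula itself appears only as an image in the report, so here it is in plain text, using the same notation as the rest of this post:

    rho KR20 = (k / (k – 1)) * (1 – SUM PQ / σ²)

where: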

  • k = number of questions (30 in this example)
  • SUM PQ = 4.77 (p = proportion of students answering a question correctly, q = 1 – p, summed over all questions)
  • σ² = variance of the total scores of all the people taking the test. The value given by the software is 337.54, which is totally wrong. The correct value is 31.83.

Variance is 31.83, not 337.54

Therefore rho KR20 = (30/29) * (1 – (4.77/31.83)) = 0.88, not 0 as in the above report. Since rho = 0.88, the questions have high reliability.
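If you want to redo this calculation yourself, here is a minimal sketch in Python using the figures quoted above. With your own data, SUM PQ and the variance of the total scores would of course be computed from the answer sheets:

    # KR-20 reliability from the values quoted in this report.
    k = 30            # number of questions
    sum_pq = 4.77     # SUM PQ over all questions
    variance = 31.83  # variance of the students' total scores

    kr20 = (k / (k - 1)) * (1 - sum_pq / variance)
    print(round(kr20, 2))  # 0.88, not 0 as printed in the report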

The OMR software’s wrong reliability value stems from its calculating the wrong value for the variance. The formula the OMR software uses is also rather confusing, due to the missing brackets.

Wrong reliability formula used by the OMR software

Other Errors – Standard Deviation & S.E. of Measurement

The variance is 31.83, and since the standard deviation is the square root of the variance, the standard deviation is SQRT(31.83) = 5.64, NOT 18.37 as in the OMR report.

The other mistake is the Standard Error of Measurement. The Standard Error of Measurement is not the same as the Standard Deviation, although that is the impression the above OMR report gives, since the two values in the report are identical.

Standard deviation “provides a sort of average of the differences of all scores from the mean.” This means that it is a measure of the dispersion of scores around the mean.

The standard error of measurement, on the other hand, is related to test reliability: it indicates the dispersion of the measurement errors when you estimate students’ true scores from their observed test scores.

The S.E.M. formula relies on the correct value for reliability. Therefore if the reliability value is wrong, your S.E.M. will be wrong too.

Standard Error of Measurement = Standard deviation * SQRT (1 – Reliability)
Standard Error of Measurement = 5.64 * SQRT (1 – 0.88) = 5.64 * 0.346 = 1.95

So it is supposed to be 1.95, not 18.37.
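Putting both of these corrections into a short Python sketch:

    import math

    variance = 31.83
    reliability = 0.88  # the corrected KR-20 value from above

    sd = math.sqrt(variance)               # 5.64, not 18.37
    sem = sd * math.sqrt(1 - reliability)  # 1.95, not 18.37
    print(round(sd, 2), round(sem, 2))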

Conclusion

Please go through the OMR report after every examination. It will help you maintain the high quality of your examinations. However, please take note of its limitations and mistakes. Please be aware that the following values in the OMR report are WRONG!

  • Reliability
  • Variance
  • Standard Error of Measurement
  • Standard Deviation

Part 2 -> Calculating Difficulty Index, Discrimination Index, Reliability & SEM

Update 9th April 2015:

The OMR software has been rectified for KCKL & PPUKM. It is now reporting the reliability and standard error of measurement correctly.

Reliability & SEM correctly reported.
