Discussion – Multiple Raters

There are many instances in which multiple raters use a single psychometric measure to evaluate one individual, such as in job performance appraisals. You may have heard of 360-degree reviews, which allow multiple people who work with an employee (typically peers, subordinates, and supervisors) to provide feedback on performance. The hope is that, with multiple sources of input, a fairer and more complete picture of an employee's performance can be gained.

Several considerations must be addressed, however, when implementing a multiple-rater assessment. A strategy must be devised for combining the separate evaluations: the scores may be averaged, weighted and combined according to a rating scheme, or the scores of the rater or pair of raters judged most qualified may be used. It is also necessary to examine the reliability of the assessment. The intraclass correlation coefficient (ICC) and Cohen's kappa are two statistics often used to measure inter-rater reliability. These statistics indicate the degree to which the raters agree in their scores, and they are useful for improving assessments and rater training; a brief illustration appears below.
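
As a minimal sketch of how these ideas look in practice, the following Python example combines three raters' scores by averaging and computes the ICC and Cohen's kappa. It assumes the pandas, pingouin, and scikit-learn libraries are available; the employees, rater roles, and scores are hypothetical, and averaging is only one of the combination strategies mentioned above.

```python
# A minimal sketch (hypothetical data): combining and checking agreement
# among three raters' job-performance scores.
import numpy as np
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings on a 1-5 scale: 6 employees, each rated by 3 raters.
ratings = pd.DataFrame({
    "employee": np.repeat(["E1", "E2", "E3", "E4", "E5", "E6"], 3),
    "rater":    ["peer", "subordinate", "supervisor"] * 6,
    "score":    [4, 4, 5,  3, 3, 4,  2, 3, 2,  5, 5, 5,  3, 4, 4,  1, 2, 2],
})

# One simple combination strategy: average each employee's scores
# into a single composite rating.
composite = ratings.groupby("employee")["score"].mean()
print(composite)

# Intraclass correlation as an index of inter-rater reliability
# (pingouin reports several ICC forms, e.g., ICC2k for the average of k raters).
icc = pg.intraclass_corr(data=ratings, targets="employee",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Cohen's kappa measures chance-corrected agreement between two raters
# on categorical ratings.
peer = ratings.loc[ratings["rater"] == "peer", "score"].to_numpy()
supervisor = ratings.loc[ratings["rater"] == "supervisor", "score"].to_numpy()
print("Cohen's kappa (peer vs. supervisor):",
      cohen_kappa_score(peer, supervisor))
```

As a rule of thumb, kappa is most appropriate when raters make categorical judgments (e.g., pass/fail), while the ICC is generally preferred for continuous or ordinal rating scales such as the 1-5 scores used here.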

To prepare for this Discussion, consider how you might combine multiple raters' evaluations of an individual on a measure of job performance. Also consider the psychometric implications of using multiple raters and how you might improve the reliability of this type of assessment.

With these thoughts in mind:

Post by Day 4 an explanation of how you might combine multiple raters' evaluations of an individual on a measure of job performance. Provide a specific example of this use. Then explain the psychometric implications of using multiple raters. Finally, explain steps you could take to improve the reliability of a multi-rater assessment. Support your response using the Learning Resources and the current literature.