Kappa (may not be combined with "by")

Kappa measures the agreement between raters. There are several situations in which interrater agreement can be measured, e.g., two or more raters, always the same raters or interchangeable raters, and so on.

This entry deals only with the simplest case: two unique raters. Note that kap is not an abbreviation; it is the full name of the command for this case. Stata also has a kappa command, but it serves a different purpose.

kap var1 var2, tab

will compute an unweighted kappa coefficient. Option tab additionally displays a cross-tabulation of the two raters' assessments.
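
For illustration, here is a minimal self-contained sketch with two hypothetical rating variables rater1 and rater2 (one observation per subject, categories 1 to 4):

* hypothetical data: each row is one subject rated by both raters
clear
input rater1 rater2
1 1
2 1
2 2
3 3
4 3
3 4
end
kap rater1 rater2, tab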

kap var1 var2, wgt(w)

will weight disagreements according to their distance on a linear scale. For instance, if there are four categories, ratings in adjacent categories receive weight 0.667 (i.e., they count as two-thirds of an agreement), and ratings two categories apart receive weight 0.333.
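
These linear weights correspond to 1 - |i - j|/(k - 1), where i and j are the two ratings and k is the number of categories; the small calculation below merely reproduces the figures quoted above for k = 4:

display 1 - 1/3    // ratings one category apart: about 0.667
display 1 - 2/3    // ratings two categories apart: about 0.333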

kap var1 var2, wgt(w2)

will use "squared" weights, which makes categories even more alike.

Users may also provide their own weights; see the manual or the help system for the details.
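
For example, a custom weighting scheme can be defined with the kapwgt command and then referenced by name in the wgt() option; the name myw and the numerical values below are only a hypothetical sketch for four categories:

* lower triangle of a symmetric 4 x 4 weight matrix; the diagonal must be 1
kapwgt myw 1 \ .8 1 \ 0 .8 1 \ 0 0 .8 1
kap var1 var2, wgt(myw)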

© W. Ludwig-Mayerhofer, Stata Guide | Last update: 19 Jul 2012