How to test for a significant difference between Cohen's Kappa values?

Leonard Hickman
Leonard Hickman on 6 Sep 2021
Answered: Jeff Miller on 14 Sep 2021
I have calculated Cohen's Kappa for agreement between Test A and Test B, as well as Cohen's Kappa for agreement between Test A and Test C. What method would I use to test for a significant difference between the Kappa values for A-B agreement compared to A-C agreement? Are there any existing scripts/functions available for this?
  2 Comments
Leonard Hickman
Leonard Hickman on 13 Sep 2021
Two separate samples: one sample that underwent tests A & B and one sample that underwent tests A & C.


Answers (3)

Star Strider
Star Strider on 6 Sep 2021
Edited: Star Strider on 13 Sep 2021
I used Cohen’s κ many years ago. From my understanding, based on reading Fleiss’s book (and corresponding with him), Cohen’s κ is normally distributed. An excellent (in my opinion) and free resource is: Interrater reliability: the kappa statistic. There are others, although not all are free.
EDIT — (13 Sep 2021 at 10:58)
To get p-values and related statistics for normally-distributed variables, the ztest function would likely be appropriate.
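For instance, a minimal sketch of that normal-approximation comparison for two independent samples (the kappa values, standard errors, and variable names here are only placeholders, not taken from the question):
% Placeholder estimates: kappa and its standard error for each pair of tests
kappaAB = 0.65;  seAB = 0.08;     % agreement between Test A and Test B
kappaAC = 0.50;  seAC = 0.11;     % agreement between Test A and Test C
% z statistic for the difference between two independent kappas
z = (kappaAB - kappaAC) / sqrt(seAB^2 + seAC^2);
p = 2*normcdf(abs(z), 'upper')    % two-sided p-value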

Ive J
Ive J on 12 Sep 2021
You can build confidence intervals around your Kappa values, and then see if they overlap.
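A minimal sketch of that idea, assuming you already have each kappa and its standard error (the numbers and names below are placeholders):
kappaAB = 0.65;  seAB = 0.08;     % placeholder kappa and SE for the A-B sample
kappaAC = 0.50;  seAC = 0.11;     % placeholder kappa and SE for the A-C sample
ciAB = kappaAB + [-1 1]*1.96*seAB;    % 95% CI, normal approximation
ciAC = kappaAC + [-1 1]*1.96*seAC;
intervalsOverlap = ciAB(1) <= ciAC(2) && ciAC(1) <= ciAB(2)   % true if the CIs overlap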
  2 Comments
Ive J
Ive J on 13 Sep 2021
You may want to take a look at this thread. Then you can calculate the z-score and get a p-value out of this.
pval = 2*normcdf(abs(zvalue), 'upper'); % two-sided p-value from the z-score



Jeff Miller
Jeff Miller on 14 Sep 2021
As I understand it, the fundamental question is whether tests A & B agree better than tests A & C, beyond a minor improvement that could just be due to chance (or agree worse, depending on how the tests B and C are labelled). The null hypothesis is that the agreement between A & B is equal to the agreement between A & C.
The most straightforward test for this case is the chi-square test for independence. Imagine the data summarized in a 2x2 table like this:
%                 Tests agree    Tests disagree
%  A & B group:       57               17
%  A & C group:       35                8
with total N's of 74 in the first group and 43 in the second group. MATLAB's 'crosstab' command will compute that chi-square test for you. See this answer for an explanation of how to format the data and run the test.
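For example, a sketch of that test using the counts in the table above (the variable names are just illustrative; crosstab is in the Statistics and Machine Learning Toolbox):
% Rebuild raw observations from the 2x2 summary counts shown above
group = [repmat({'AB'}, 74, 1); repmat({'AC'}, 43, 1)];         % which pair of tests each case received
agree = [repelem([1; 0], [57; 17]); repelem([1; 0], [35; 8])];  % 1 = tests agree, 0 = tests disagree
[tbl, chi2, p] = crosstab(group, agree)                         % chi-square test of independence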
Cohen's Kappa is a useful numerical measure of the extent of agreement, but it isn't really optimal for deciding whether the levels of agreement are different for the two pairs of tests.
