On the surface, these data appear amenable to analysis using the methods for 2 × 2 tables (if the variable is categorical) or correlation (if it is numerical) that we have discussed previously in this series [1, 2]. However, closer examination shows that this is not the case. In those two methods, the two measurements on each individual relate to different variables (e.g. exposure and outcome, or height and weight), whereas in agreement studies the two measurements relate to the same variable (e.g. breast X-rays read by two radiologists, or hemoglobin measured by two methods). Once kappa has been calculated, the researcher will probably wish to assess its significance by computing a confidence interval for the obtained kappa. Percent agreement is a direct measure, not an estimate, so there is little need for a confidence interval. Kappa, however, is an estimate of interrater reliability, and confidence intervals are therefore of more interest. The standard error of kappa is calculated by ignoring the fact that pe is estimated from the data, treating po as the estimated probability of a binomial distribution, and using asymptotic normality (i.e. assuming that the number of items is large and that po is not close to 0 or 1).
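As a rough illustration of this approach, the sketch below computes Cohen's kappa together with the simple asymptotic standard error described above and a 95% confidence interval. It is only a minimal sketch: the function name and the two rating vectors are made up for the example.

```python
# Minimal sketch: Cohen's kappa, its asymptotic SE, and a 95% CI.
# The rating vectors are hypothetical, for illustration only.
import numpy as np

def kappa_with_ci(ratings_a, ratings_b, z=1.96):
    """Cohen's kappa, its asymptotic SE, and a normal-approximation CI."""
    ratings_a = np.asarray(ratings_a)
    ratings_b = np.asarray(ratings_b)
    n = len(ratings_a)
    categories = np.union1d(ratings_a, ratings_b)

    # Observed agreement po: proportion of items both raters coded identically.
    p_o = np.mean(ratings_a == ratings_b)

    # Expected agreement pe under independence, from the marginal proportions.
    p_e = sum(np.mean(ratings_a == c) * np.mean(ratings_b == c) for c in categories)

    kappa = (p_o - p_e) / (1 - p_e)

    # Asymptotic SE, treating po as a binomial proportion and pe as fixed.
    se = np.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    return kappa, se, (kappa - z * se, kappa + z * se)

# Two hypothetical raters coding 10 items as 0/1:
a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
print(kappa_with_ci(a, b))
```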

SEκ (and confidence intervals in general) can also be estimated with bootstrap methods. In the output below, the "Simple Kappa" row gives an estimated kappa of 0.389 with an asymptotic standard error (ASE) of 0.0598. The difference between the observed agreement and the agreement expected under independence is about 40% of the maximum possible difference. Based on the reported 95% confidence interval, the true value lies somewhere between 0.27 and 0.51, suggesting only moderate agreement between Siskel and Ebert.

Kappa is a form of correlation coefficient. Correlation coefficients cannot be interpreted directly, but a squared correlation coefficient, called the coefficient of determination (COD), is directly interpretable: the COD is the proportion of variation in the dependent variable that can be explained by the independent variable. Although a true COD is calculated only for Pearson's r, an estimate of the variance accounted for can be obtained for any correlation statistic by squaring its value. Squaring kappa translates conceptually into the accuracy (i.e. the inverse of error) in the data that is attributable to congruence between the data collectors. Figure 2 shows an estimate of the amount of correct and incorrect data in research data sets, based on the degree of congruence as measured by either percent agreement or kappa.
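As a quick check on the quoted figures, the reported interval can be reproduced from the estimate (0.389) and its ASE (0.0598) using the normal approximation, and squaring kappa gives the rough variance-accounted-for estimate discussed above. The snippet below is only an illustrative calculation.

```python
# Reproduce the reported interval from the quoted estimate and ASE.
kappa, ase, z = 0.389, 0.0598, 1.96

lower, upper = kappa - z * ase, kappa + z * ase
print(f"95% CI: ({lower:.2f}, {upper:.2f})")   # -> (0.27, 0.51)

# Squaring kappa gives a rough estimate of the variance accounted for,
# in the spirit of a coefficient of determination.
print(f"kappa squared: {kappa**2:.2f}")        # -> 0.15
```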

A number of statistics have been used to measure interrater and intrarater reliability. A partial list includes percent agreement, Cohen's kappa (for two raters), Fleiss' kappa (an adaptation of Cohen's kappa for three or more raters), the contingency coefficient, Pearson's r and Spearman's rho, the intraclass correlation coefficient, the concordance correlation coefficient, and Krippendorff's alpha (useful when there are multiple raters and multiple possible ratings).
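Several of these statistics are available in common Python libraries. The sketch below assumes scikit-learn, SciPy, and statsmodels are installed, and uses three hypothetical raters purely for illustration.

```python
# Sketch: a few of the listed statistics computed with common libraries.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr, spearmanr
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings of 8 items on a 1-3 scale by three raters.
rater1 = [2, 3, 1, 2, 3, 3, 1, 2]
rater2 = [2, 3, 1, 1, 3, 2, 1, 2]
rater3 = [2, 2, 1, 2, 3, 3, 1, 2]

# Percent agreement and Cohen's kappa for two raters.
print("percent agreement:", np.mean(np.array(rater1) == np.array(rater2)))
print("Cohen's kappa:", cohen_kappa_score(rater1, rater2))

# Pearson's r and Spearman's rho treat the ratings as numeric scores.
print("Pearson r:", pearsonr(rater1, rater2)[0])
print("Spearman rho:", spearmanr(rater1, rater2)[0])

# Fleiss' kappa for three or more raters, via a subjects x categories count table.
table, _ = aggregate_raters(np.column_stack([rater1, rater2, rater3]))
print("Fleiss' kappa:", fleiss_kappa(table))
```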