Advice is sought on methods to analyse three raters' classification
of 320 constructs (items) into 22 categories. A measure of agreement
is required to assess:
a) agreement between pairs of raters, e.g. rater 1 with rater 2
b) overall agreement among all three raters
c) the reliability of the classification of constructs into particular
categories.
The content analysis literature suggests that 1) an agreement
coefficient (correcting for chance) is preferable to a correlation,
and 2) coincidence matrices may be useful.
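For what it may be worth to other readers: standard chance-corrected coefficients of this kind include Cohen's kappa for a pair of raters and Fleiss' kappa for overall agreement among several raters. Below is a minimal Python sketch of both, using small illustrative label lists rather than the actual 320-construct data; the function names and toy data are my own, not from the original query.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(r1)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement by chance, from each rater's marginal proportions
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

def fleiss_kappa(ratings):
    """Overall chance-corrected agreement for 2+ raters (Fleiss' kappa).

    `ratings` is a list of per-item lists, one category label per rater.
    """
    N = len(ratings)            # number of items
    n = len(ratings[0])         # number of raters per item
    cats = sorted({c for row in ratings for c in row})
    # n_ij: number of raters assigning item i to category j
    counts = [[row.count(c) for c in cats] for row in ratings]
    # Mean per-item agreement
    p_bar = sum((sum(x * x for x in row) - n) / (n * (n - 1))
                for row in counts) / N
    # Expected agreement from overall category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(len(cats))]
    pe = sum(p * p for p in p_j)
    return (p_bar - pe) / (1 - pe)

# Toy demo: three raters in perfect agreement give kappa = 1.0
demo = [['A', 'A', 'A'], ['B', 'B', 'B'], ['C', 'C', 'C'], ['A', 'A', 'A']]
print(fleiss_kappa(demo))  # 1.0
```

For the coincidence-matrix approach mentioned above, the usual coefficient is Krippendorff's alpha, which is built directly from a coincidence matrix and also handles missing ratings; the same toy-data pattern would apply.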
Any assistance would be appreciated.
Please forward any replies to me and I will pass them on.
Thanks
Wendy Moyle w.moyle@qut.edu.au