Advice is sought regarding methods to analyse three raters' classification of
320 constructs (items) into 22 categories. A measure of agreement is
required to assess:
a. agreement between pairs of raters, e.g. rater 1 with rater 2
b. overall agreement among the 3 raters
c. the reliability of the classification of constructs into particular
categories
The content analysis literature suggests 1) an agreement coefficient
(correcting for chance) is preferable to a correlation, and 2)
coincidence matrices may be useful.
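One possible route, sketched below, uses two standard chance-corrected coefficients: Cohen's kappa for pairwise agreement (need a) and Fleiss' kappa for overall agreement among the three raters (need b). (Coincidence matrices are the basis of Krippendorff's alpha, which is another option for the same needs.) The function names and toy data are illustrative, not taken from the original question.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected pairwise agreement (Cohen, 1960).
    r1, r2: category labels assigned by two raters to the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement by chance, from each rater's marginal proportions
    pe = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

def fleiss_kappa(ratings, categories):
    """Chance-corrected agreement among >2 raters (Fleiss, 1971).
    ratings: one list per item, containing each rater's label for that item."""
    N = len(ratings)                  # number of items
    n = len(ratings[0])               # raters per item
    # n_ij: how many raters put item i into category j
    counts = [[row.count(c) for c in categories] for row in ratings]
    # per-item agreement
    P_i = [(sum(x * x for x in row) - n) / (n * (n - 1)) for row in counts]
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (N * n)
           for j in range(len(categories))]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)     # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy data: 3 raters, 3 items, 2 categories (the real task has 320 and 22)
rater1 = ["A", "A", "B"]
rater2 = ["A", "A", "B"]
three_raters = [["A", "A", "A"], ["A", "A", "B"], ["B", "B", "B"]]
print(cohens_kappa(rater1, rater2))
print(fleiss_kappa(three_raters, ["A", "B"]))
```

For need (c), the per-item agreement values `P_i` inside `fleiss_kappa` indicate which individual constructs were classified reliably and which drew disagreement.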
Any assistance would be appreciated.
If you return any replies to me I will pass them on to him.
Hoping someone can help.
Wendy Moyle w.moyle@qut.edu.au