# Re: methodological question about difference scores

Travis Gee (tgee@metz.une.edu.au)
Wed, 31 Mar 1999 21:30:37 +1000

At 08:48 31/03/1999 +1000, you wrote:
>I would appreciate advice on calculating the difference between two grids.
>I am not familiar with the relevant literature, though would appreciate
>pointers to it.
>
>I am aware of two methods (though there may be other methods) to calculate
>the level of agreement between two persons who rated the same elements on
>the same constructs. These methods, which do not give the same results, are:
>
>(1) A difference grid (the absolute difference for each cell, summed by row
>and column)
>
>(2) Transforming ratings into a "disagreement score", e.g., 0 if there was
>agreement, 1 if one person used 4 and the other 1 or 2, and 2 if they were
>on opposite ends of the scale.
>
>I also considered:
>
>Bannister's consistency score
>the possibility of joining the two grids and performing a cluster analysis,
>etc.
>comparison on mean ratings for certain elements/constructs
>
>
>Is there a methodologically preferable way of calculating difference (or
>alternatively sameness)?
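For concreteness, the two measures in the question can be sketched in a few
lines of Python (a minimal illustration with made-up 1-5 ratings; the coarse
scheme below is one reading of the 0/1/2 scoring, with 2 reserved for
opposite poles):

```python
import numpy as np

# Made-up example: two raters score 4 elements on 3 constructs (1-5 scale).
grid_a = np.array([[1, 4, 5, 2],
                   [3, 3, 1, 5],
                   [2, 5, 4, 1]])
grid_b = np.array([[2, 4, 3, 2],
                   [5, 3, 1, 4],
                   [2, 1, 4, 2]])

# (1) Difference grid: absolute difference per cell, summed by row/column.
diff = np.abs(grid_a - grid_b)
by_construct = diff.sum(axis=1)   # per-row (construct) disagreement
by_element = diff.sum(axis=0)     # per-column (element) disagreement

# (2) Coarser "disagreement score": 0 = identical rating, 2 = opposite
#     poles (1 vs 5), 1 = any other discrepancy.
disagreement = np.where(diff == 0, 0, np.where(diff == 4, 2, 1))
```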

Hi, Bob,

It rather depends on your precise question. For example, if you want to
know whether or not the two people have the same structure amongst their
personal constructs, you could obtain a KxK similarity matrix (using whatever
similarity coefficient is warranted) for K constructs. A separate
multidimensional scaling solution for each will result in two different
sets of coefficients, in D dimensions. A comparison using canonical
correlation could be done, which would give D coefficients. Since the
assumptions of cancorr are probably violated, a permutation method could
be used to obtain an estimate of where your solution stands with respect to
a permutation distribution.
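Short of the full MDS-plus-canonical-correlation route, the permutation idea
can be illustrated with a simpler stand-in: correlate the off-diagonal
entries of the two K x K construct similarity matrices directly (a
Mantel-style statistic) and build the reference distribution by relabelling
the constructs of one grid. A rough sketch, with made-up ratings (names and
data are illustrative only):

```python
import numpy as np
from itertools import permutations

# Hypothetical grids: K = 4 constructs rated on 6 elements by two people.
grid_a = np.array([[1, 2, 3, 4, 5, 4],
                   [2, 2, 3, 5, 4, 4],
                   [5, 4, 3, 2, 1, 2],
                   [1, 3, 2, 4, 3, 5]], dtype=float)
grid_b = np.array([[2, 1, 3, 4, 5, 5],
                   [1, 2, 4, 5, 5, 4],
                   [4, 5, 3, 1, 2, 1],
                   [2, 2, 3, 5, 4, 4]], dtype=float)

def construct_similarities(grid):
    """K x K similarity matrix; here plain Pearson correlations."""
    return np.corrcoef(grid)

def matrix_corr(s1, s2):
    """Correlate the off-diagonal (upper-triangle) entries."""
    iu = np.triu_indices_from(s1, k=1)
    return float(np.corrcoef(s1[iu], s2[iu])[0, 1])

s_a = construct_similarities(grid_a)
s_b = construct_similarities(grid_b)
observed = matrix_corr(s_a, s_b)

# Reference distribution: every relabelling of person B's constructs.
k = s_a.shape[0]
stats = [matrix_corr(s_a, s_b[np.ix_(p, p)]) for p in permutations(range(k))]
p_value = np.mean([s >= observed for s in stats])  # identity included
```

With only 4 constructs all 24 relabellings can be enumerated; for realistic
K one would sample random permutations instead.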

For three or more raters (preferably more), INDSCAL "weirdness" coefficients
would place the various raters on a scale of sorts, which should allow some
rather meaningful comparisons.

Another question might concern the elements, and the same sort of analysis
applied to those (dis)similarities could result in a rather different
clustering.

For direct matrix-to-matrix comparisons, Larry Hubert has a book out that
deals with the quadratic assignment procedure (QAP), but I'm not convinced
that it's better than structural comparisons, because the structural
analyses also result in some meaningful 'maps' of what is going on and *in
what way* people might differ. The reference is

Hubert, L. (1987). Assignment methods in combinatorial data analysis.
New York: Marcel Dekker.
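To give a flavour of what QAP does, here is a toy version for two small
proximity matrices: the index is a raw cross-product between matrix A and a
relabelled matrix B, referred to its distribution over all relabellings.
(The matrices below are made up; Hubert's book covers the real machinery,
including sampling when full enumeration is infeasible.)

```python
import numpy as np
from itertools import permutations

# Hypothetical 4 x 4 dissimilarity matrices from two raters
# (symmetric, zero diagonal).
A = np.array([[0, 1, 4, 3],
              [1, 0, 3, 2],
              [4, 3, 0, 1],
              [3, 2, 1, 0]], dtype=float)
B = np.array([[0, 2, 4, 4],
              [2, 0, 3, 3],
              [4, 3, 0, 1],
              [4, 3, 1, 0]], dtype=float)

def qap_index(A, B, p):
    """Cross-product statistic: sum over i, j of A[i, j] * B[p[i], p[j]]."""
    return float((A * B[np.ix_(p, p)]).sum())

n = A.shape[0]
observed = qap_index(A, B, tuple(range(n)))
reference = [qap_index(A, B, p) for p in permutations(range(n))]
# One-sided p-value: how extreme the given labelling is among all 24.
p_value = np.mean([s >= observed for s in reference])
```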

There's another ref that may be of some use, as well:

Hubert, L., & Golledge, R. G. (1983). Rater agreement for complex
assessments. British Journal of Mathematical and Statistical Psychology,
36(2), 207-216.

Hope this helps,

Travis Gee
Lecturer, School of Psychology
University of New England
Armidale NSW 2351
Australia
tgee@metz.une.edu.au
+61 (2) 6773 2410
