In terms of explained variance and the fit of the model to the ratings which were made, the 2-dimensional model is clearly interpretable and an adequate "representation". However, I have chosen to report both models, as a third dimension is of importance to some participants. Interestingly, accepting a third dimension improves the measures of fit for those participants for whom the 2d model didn't fit as well, while reducing the goodness of fit for those participants for whom the 2d model fit very well. This trade-off in model fitting was assumed to explain the shift of 2 cases in the 3-dimensional plot.
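To make that comparison concrete, the sort of per-participant fit I am referring to can be sketched roughly as below. This is only an illustration of the weighted Euclidean (INDSCAL-type) fit idea, not the actual SPSS procedure or output, and the array names are made up.

import numpy as np
from scipy.spatial.distance import pdist

def subject_fit(diss, config, weights):
    """Squared correlation between one participant's observed
    dissimilarities (condensed vector) and the model distances
    implied by the group configuration plus that participant's
    dimension weights (the weighted Euclidean model)."""
    model = pdist(config * np.sqrt(weights))   # weight each dimension, then take distances
    r = np.corrcoef(diss, model)[0, 1]
    return r ** 2

# e.g. compare the same participant under the 2d and 3d solutions:
# fit_2d = subject_fit(diss_p, config_2d, weights_2d_p)
# fit_3d = subject_fit(diss_p, config_3d, weights_3d_p)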
In terms of making this decision, I examined principal component analyses of the grids of the participant who had the highest weight on each dimension (SPSS was used, as its loadings are standardized). I was interested to see whether there were differences in the types of constructs which loaded onto the factors (Travis' comments prompted me to try this).
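In case it helps, a rough sketch of what I mean by standardized loadings follows. The actual analyses were run in SPSS; the function below is only an illustration of the calculation, and its name is my own invention.

import numpy as np

def component_loadings(grid):
    """Unrotated principal component loadings for one grid, taken
    from the correlation matrix of the constructs so that, as in
    SPSS, the loadings are standardized (construct-component
    correlations).  grid : ratings, shape (n_elements, n_constructs)"""
    corr = np.corrcoef(grid, rowvar=False)        # correlate constructs over elements
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]             # largest components first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs * np.sqrt(np.clip(eigvals, 0, None))   # loading = eigenvector * sqrt(eigenvalue)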
Also, using the content analysis, I computed mean ratings for each case study on the most commonly used constructs; e.g. all ratings made on constructs coded as pertaining to insight were averaged for each case. To facilitate examination of these average ratings, graphs were developed for each set of mean ratings (the idea of plotting grids as graphs was discussed by Liseth et al. in the IJofPCP, v.6(3), 1993) and compared to the dimensional interpretation. If the model separates certain cases, those cases should have similar mean ratings, different from the ratings made on the other cases separated by the dimensional axis.
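A rough sketch of the mean-ratings-by-category calculation and the graphing follows. Again this is only illustrative; the array names and the 'insight' code are placeholders for whatever categories the content analysis produced.

import numpy as np
import matplotlib.pyplot as plt

def mean_ratings(ratings, codes, code):
    """Average the ratings made on every construct coded into one
    content-analysis category, separately for each case.
    ratings : shape (n_constructs, n_cases), pooled over participants
    codes   : content-analysis code assigned to each construct"""
    rows = [i for i, c in enumerate(codes) if c == code]
    return ratings[rows].mean(axis=0)             # one mean per case

# e.g. graph the means for the 'insight' constructs across the cases:
# means = mean_ratings(ratings, codes, "insight")
# plt.plot(range(1, len(means) + 1), means, marker="o")
# plt.xlabel("case"); plt.ylabel("mean rating (insight constructs)")
# plt.show()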
There are two major limitations to this approach: 1. it is highly dependent on the 'accuracy' of the content analysis of the construct labels, and 2. not all sets of mean ratings were constituted by the same number of constructs/participants. In my case such an approach can only be suggestive and was used to confirm interpretations which had already been made. This approach would be even more useful if all participants used the same constructs, though that is no guarantee the constructs mean the same thing to all who use them.
I ditched the idea of using regression based on subject weights because, although I might consider a dimension important, I can't assume this is associated with either high or low ratings on some construct; rather, the issue is that this dimension is important to me. The final strategy was to seek validation from other clinicians. I'm still working on this. As usual any comments are welcomed,
Bob Green.