Re: Using grids to assess changes in parent perceptions

Devi Jankowicz
Fri, 21 May 1999 21:44:23 +0000

Tony Downing writes,

>Devi Jankowicz's prompt reply to Suzanne was very welcome for me, too, as I
>had been wondering about doing exactly what he suggested for assessing
>change before and after an intervention, i.e., for those constructs that
>re-emerge with the same labels on the second occasion, do a matrix
>subtraction of the ratings for Time 2 from the ratings for Time 1. I'm
>still not quite getting my head round the status of the resulting numbers,
>though. If we process the grid in the usual way, doing either hierarchical
>cluster analysis or principal components analysis, what will it mean? I
>guess that any clusters that emerged, in constructs or elements, would show
>up constructs or elements that had tended to change together.... (The
>trailing dots denote a lingering uncertainty that has not quite left me!)

Yes, I think I see what you mean. Let's try and creep up on it gradually.

In a difference grid, the data are relatively small values with respect
to your original before, and after, values: they're _differences_ between
those values-- which themselves are probably on a 5 or 7-point rating
scale-- so most of the difference values on repeated constructs are
likely to be 1, 2 or 3. So, do you _want_ to do a cluster analysis on
these data? Lots of elements/constructs are going to be clustering
together (a fair number of tied values), giving fairly "shallow"
dendrograms, one for elements and one for constructs. All a bit muddy.
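The matrix subtraction itself is trivial; here's a quick numpy sketch, with the ratings invented purely for illustration:

```python
import numpy as np

# Hypothetical 7-point ratings for 4 elements (columns) on 3 repeated
# constructs (rows), before (Time 1) and after (Time 2) the intervention.
time1 = np.array([[1, 4, 7, 2],
                  [3, 3, 5, 6],
                  [2, 5, 4, 4]])
time2 = np.array([[2, 4, 6, 4],
                  [3, 2, 5, 7],
                  [1, 5, 6, 4]])

# The "difference grid": Time 2 minus Time 1, cell by cell.
diff = time2 - time1
print(diff)
```

Notice that every entry in `diff` lies between -2 and 2 here: small values and plenty of ties, which is exactly why the dendrograms come out shallow.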

Assuming you do, how to interpret the result? Well, think of a
cluster-analysis as a two-step process.
1. You firstly work out the extent to which the ratings of elements are
similar (compare each column with each other column of a grid and
calculate the sum-of-differences down the column). Now imagine taking a
pair of scissors and cutting the grid up into columns, which you then
shuffle around until the columns with the most similar ratings
(sums-of-differences) lie side by side.
2. Then do the same for constructs (remembering that constructs are
bipolar, so you have to calculate the sums-of-differences scores twice
over, reversing the directionality of one of each pair of constructs
you're comparing, and choosing the directionality which gives the lowest
sum-of-differences). Now imagine yourself taking a pair of scissors and
cutting along the rows of the grid, and reshuffling the rows until the
constructs with the most similar ratings (sums of differences) lie side
by side.
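Those two sums-of-differences calculations can be sketched in a few lines of numpy (the grid and the 5-point scale are invented for illustration; note how reversing a construct's poles can turn an apparently large distance into zero):

```python
import numpy as np

# Hypothetical 5-point grid: rows = constructs, columns = elements.
grid = np.array([[1, 2, 5, 4],
                 [2, 2, 4, 5],
                 [5, 4, 1, 2]])
max_rating, min_rating = 5, 1  # scale endpoints, needed to reverse a construct

# Step 1: element distances -- sum of absolute differences down each
# pair of columns.
n_elem = grid.shape[1]
elem_dist = np.zeros((n_elem, n_elem), dtype=int)
for i in range(n_elem):
    for j in range(n_elem):
        elem_dist[i, j] = np.abs(grid[:, i] - grid[:, j]).sum()

# Step 2: construct distances -- constructs are bipolar, so compare each
# pair both as-is and with one construct reversed, keeping the smaller sum.
n_con = grid.shape[0]
con_dist = np.zeros((n_con, n_con), dtype=int)
for i in range(n_con):
    for j in range(n_con):
        direct = np.abs(grid[i] - grid[j]).sum()
        reversed_j = (max_rating + min_rating) - grid[j]  # swap the poles
        flipped = np.abs(grid[i] - reversed_j).sum()
        con_dist[i, j] = min(direct, flipped)

print(elem_dist)
print(con_dist)
```

In this made-up grid the third construct is simply the first with its poles swapped, so their distance comes out as 0 once the reversal is allowed for.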
That's the heart of a cluster analysis: the remainder is just a set of
conventions on how you represent groupings of adjacent elements /
constructs as a logical chain of nearest-neighbours, which helps you
create the major and minor branches, as it were, of your dendrogram.
(Jankowicz and Thomas, 1982/3 is a good way of understanding cluster
analysis: it's a description of how to do one by steam-driven,
non-computerised, pencil-and-paper means, would you believe. A 10 x 10
grid takes about an hour, but by golly you'll get a feel for what's
involved in cluster analysis if you try it out for yourself!)

Conclusion: And so, when your ratings are difference values, your element
clusters will group together numbers which represent elements which have
changed in varying degrees, (with a cluster of strongly-changed elements,
another of little-changed elements, etc.); ditto for constructs.

_But_ this all depends on sums-of-differences: in the case of elements,
across all the constructs; and in the case of constructs, across all the
elements.

My own feeling is that, while the result in "Conclusion" above sounds
attractive, very often when you're looking at differences, you're more
interested in questions like "which particular elements have changed, on
which particular constructs?" You are, in effect, _weighting_ the
significance of some constructs over others in your search for what the
changes mean. But, because a cluster analysis works in terms of _sums of
differences_, the resulting dendrograms mightn't be all that informative.

Another reason, I suppose, for not diving into cluster analysis with
change grids? I dunno. Mildred Shaw knows a lot about this sort of thing:
are you there, Mildred? Also, Richard Bell should have something
interesting to say on all this. (Richard and Mildred take over when my
own knowledge of statistical analysis goes phut, as it were.)

To turn to your particular application: I don't know a lot about measures
of complexity, so I won't hazard any comments on those. But look: if
you're really interested in the extent to which parents of kids with
communication difficulties do or don't benefit from the Hanen programme,
why not forget about the numbers for a while and just look at the
_content_ of their constructs before and after? Derive a set of
categories which reliably classifies the ways in which they construe
_before_ the programme, and apply these categories to the constructs
_after_ the programme. Now, what _are_ the categories? What are people
actually saying?

Next, okay, does the number of constructs in each category differ, and do
you need new categories to accommodate the "after" constructs? What are
they _saying_ that's different? How has the emphasis in the whole
repertoire changed? Your search for "common themes" among parents, as you
describe it, makes this a procedure just as likely to be useful as a
blind bit of number-crunching with cluster-analysed change grids looking
for structural changes-- something else altogether.
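Tallying the before/after category counts needs nothing fancier than a pair of counters; here's a sketch with invented categories and codings, just to show the shape of the comparison:

```python
from collections import Counter

# Hypothetical category codings for the constructs elicited before and
# after the programme (both the categories and the counts are invented).
before = ["child's behaviour", "child's behaviour", "own competence",
          "child's behaviour", "communication acts"]
after = ["communication acts", "communication acts", "own competence",
         "parent-child interaction", "own competence"]

before_counts = Counter(before)
after_counts = Counter(after)

# How has the emphasis shifted across the whole repertoire?
for category in sorted(set(before) | set(after)):
    print(f"{category}: {before_counts[category]} -> {after_counts[category]}")
```

A category appearing only in the "after" column (here, "parent-child interaction") is exactly the sort of new category the question above is probing for.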

The nice thing about content analysis is that, so long as you've used the
same set of elements for each interviewee, you can aggregate constructs
into a sample in which the constructs, rather than the interviewees, are
your unit of analysis: no need to worry about
n = 1 case studies. Peter Honey's technique is particularly powerful (as
I must be boring everyone to death, now, by reiterating!)

Sorry to go on. In conclusion:
>I've no doubt that finding the right kinds of
>elements is the first major objective in designing this study, but I don't
>yet feel sure that we have quite got the right kinds of elements to show up
>the construing changes that we have postulated. We would be enormously
>grateful for comments and criticisms.

You know how Mary Tudor said "When I die, you will find the word 'Calais'
written on my heart"? Well, when I die, you will find "I used the wrong
elements, dammit!" engraved on mine. Yes, it really is all about finding
the most useful set of elements.

Now, at first blush, your elements sound fairly complicated to me. If
they come from some pre-existing analytic framework you're using-- you
know, if some published work suggests that these are the "child" elements at
stake when you're working with parents whose kids have severe
communication difficulties-- then fine.

If you don't have such a pre-existing framework, maybe a grid which
compares other children without communication difficulties, with the
particular child now with his/her difficulties, and the same child were
the difficulties to be lessened? Sheer speculation on my part, as I know
nothing about the field you're researching.

But one handy way of checking whether a set of elements is usable is to
try them out on yourself: can you get your head round a triad composed of
the elements you're planning to use, sufficient to identify constructs
easily? Something else which helps is to use an appropriate eliciting
phrase which reminds your interviewee of the purpose of the grid, along
the following lines: "in what way are two of these the same, and one
different, in terms of XXX?" where XXX is a phrase that focuses the
interviewee's attention on the exact situation, circumstances, or
behaviour you're interested in.
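If you want to run that self-check systematically, the triads are just the 3-element combinations of your element set. A throwaway sketch (the elements and the XXX phrase are pure invention on my part, in the spirit of the speculation above):

```python
from itertools import combinations

# Hypothetical element set -- try each triad on yourself first.
elements = ["my child now",
            "my child as I'd like him/her to be",
            "a child without communication difficulties",
            "myself as a parent"]

# Hypothetical focusing phrase (the "XXX" of the eliciting question).
prompt = ("In what way are two of these the same, and one different, "
          "in terms of how communication goes at home?")

triads = list(combinations(elements, 3))
for triad in triads:
    print(prompt)
    for element in triad:
        print(f"  - {element}")
    print()
```

Four elements give only four triads; if any of them leaves you stuck for a construct, that's a warning sign about the element set, not about you.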

I must stop, this is getting more and more rambly.

Kind regards,

Devi Jankowicz

Jankowicz A.D. & Thomas L.F. (1982/3) "The FOCUS cluster-analysis
algorithm in human resource development" _Personnel Review_ 11, 15-22 and
erratum, 1983, 12, 1, 22 (you need the latter too: the journal misprinted
a bit of the computational procedure in the first of the two issues).