Praise for RepGrid2 and a question about its cluster analysis

Tony Downing
Fri, 25 Jun 1999 01:23:53 +0100

1) How great it is using RepGrid2 on a laptop to elicit grids in the field:

We are using rep grids to try to evaluate changes in construing, by parents
of children with severe communication difficulties, of their children's
behaviour and communication. We are engaging the parents in rep. grid
interviews at the start of an intervention that is designed to increase
their sensitivity to their children's communication, and we will repeat the
process after the end of the intervention, at Time 2. A few weeks ago, I
worried in public, on this list, about the possible snag that the rep grid
procedure itself might be therapeutic - which would be fine except that it
might prevent us being able to interpret any increases in sensitivity shown
by the grids at Time 2 as being due to the intervention rather than to the
grid procedure itself. A particular dilemma arose in needing to check out
the interpretation of each grid with each participant, without virtually
telling them in advance (at Time 1, the start of the intervention) how we
wanted them to respond at Time 2.

I'm grateful for all the helpful comments received in response to my
queries - some technically helpful, and others stimulatingly challenging to
the presuppositions implicit in the design and procedure I've outlined
above. I thought people might now like to
hear of the compromise procedure that we have adopted, with the aid of a
portable laptop Macintosh computer running RepGrid2, and of how well-suited
this software seems to be to this kind of application.

We are eliciting elements and constructs in home visits to the parents,
collecting data for one 10x10 grid about the parent themselves, and another
10x10 grid about their perceptions of their child and other children in a
range of situations. RepGrid2 will elicit constructs from triads of
elements, immediately presenting each construct as a thick vertical line
with two elements at one end and one element at the other. Labels for all
the other elements start off parked at the side of the screen, and "rating"
consists of dragging the element labels to appropriate positions along the
line. It is very intuitive and easy, and if people want to change their
minds about the ordering or position of some or all of the elements along
the line, including moving the elements of the eliciting triad away from
the ends of the construct continuum, they can do so.

After the initial elicitation, the program will point out any elements that
have very similar ratings on the constructs so far elicited, and ask
whether there is any other construct that might be included and on which
these elements would score differently. Similarly, it will point out any
pair of constructs that have been used very similarly, and ask if there is
any additional element which would discriminate between these. I had felt a
bit anxious about our grids being so small (10x10), but this feature of
RepGrid2 seems very good in helping us to know whether the domains have or
have not been sampled adequately. From our point of view, the nice thing
is that we can have at least this level of checking out of the grid with
the participant, without, at that point, showing them the entire analysis.
(Our intention is to have an additional session, after Time 2, when we do
show them the full grids, from before and after the intervention, each with
its rows and columns rearranged in the light of the hierarchical cluster
analysis, and see what they think of them and of the analyses.)

Structuring the interview by using the elicitation screens of RepGrid2, on
the laptop computer, as the shared focus of the interviewer and the
participant, seems a very natural and easy way of doing it. It seems to
make it easy for the interviewer to be helpful and encouraging yet
unobtrusive and not directive.

2) Question about RepGrid2 cluster analysis

The RepGrid2 cluster analysis procedure is called "Focus". When you do a
Focus analysis, there is a preliminary screen with various boxes to check.
Two of them, which are alternatives, are labeled "Edges" and "Interior". I
got the impression that these offered a choice between alternative
clustering algorithms, with "Edges" probably selecting the Single Link
Algorithm and "Interior" probably selecting some other algorithm such as
the Mean Link Algorithm. Trying one or the other of these
boxes, I find that they certainly do make a difference, but I have been
utterly dumbfounded to find that choosing one or the other option, with an
already-elicited grid, changes some of the actual ratings of particular
elements on particular constructs.

Clearly I have misinterpreted the function of these check-boxes! Can
anybody enlighten me?
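To show why the behaviour surprised me, here is a minimal sketch - my own Python illustration, not RepGrid2's code, and my reading of "Edges"/"Interior" as linkage choices is only a guess - of single-link versus average-link agglomerative clustering. The linkage choice alters only the merge order, and hence the dendrogram and any row/column reordering; the ratings from which the distances were computed are never touched.

```python
# Single-link vs. average-link agglomerative clustering over a fixed
# set of pairwise distances. The items and distances are made up for
# illustration.
from itertools import combinations

def cluster(items, dist, linkage):
    """Agglomerative clustering; returns each merged cluster in order."""
    clusters = [frozenset([i]) for i in items]
    merges = []
    while len(clusters) > 1:
        best = None
        for a, b in combinations(clusters, 2):
            pair = [dist[frozenset([x, y])] for x in a for y in b]
            d = min(pair) if linkage == "single" else sum(pair) / len(pair)
            if best is None or d < best[0]:
                best = (d, a, b)
        _, a, b = best
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        merges.append(sorted(a | b))
    return merges

# Pairwise distances between four hypothetical constructs.
d = {frozenset("AB"): 1.0, frozenset("AC"): 2.0, frozenset("BC"): 6.0,
     frozenset("CD"): 3.5, frozenset("AD"): 7.0, frozenset("BD"): 7.0}

print(cluster("ABCD", d, "single"))
# -> [['A', 'B'], ['A', 'B', 'C'], ['A', 'B', 'C', 'D']]
print(cluster("ABCD", d, "average"))
# -> [['A', 'B'], ['C', 'D'], ['A', 'B', 'C', 'D']]
```

The two linkages merge the same data into different trees, but neither one has any business changing a rating - which is why I suspect I have misread what the check-boxes do.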

(In case the program is misbehaving, I should perhaps say that we are using
21-point scales. The participant will not see these numbers until we show
them the grids after Time 2, but the choice of 21 levels ensures that when
an element-label is dragged to its position along the construct-line, it
stays where it is put - it doesn't snap to a coarsely-quantised position.)
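To spell out the quantisation point, here is a toy sketch - the pixel range and the linear mapping are my assumptions, not RepGrid2's actual behaviour - of what happens when a dragged position is rounded onto an n-point scale:

```python
# Mapping a dragged pixel position onto an n-point scale and back.
# With few levels the label visibly snaps; with 21 levels the
# rounding error is small enough that it stays roughly where it
# was put. The 0-400 pixel line is a made-up assumption.

def to_level(x, x_min, x_max, n_levels):
    """Map a position along the construct line onto 1..n_levels."""
    frac = (x - x_min) / (x_max - x_min)
    return 1 + round(frac * (n_levels - 1))

def to_pixel(level, x_min, x_max, n_levels):
    """Where the label is redrawn after quantisation."""
    return x_min + (level - 1) * (x_max - x_min) / (n_levels - 1)

# Drag the label to pixel 135 on a 0-400 construct line:
for n in (5, 21):
    lvl = to_level(135, 0, 400, n)
    print(n, lvl, to_pixel(lvl, 0, 400, n))
# 5-point scale:  level 2, redrawn at pixel 100.0 (a 35-pixel jump)
# 21-point scale: level 8, redrawn at pixel 140.0 (only 5 pixels off)
```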

But what do "Edges" and "Interior" mean?


Tony Downing,
Dept. of Psychology, University of Newcastle upon Tyne, England.
Phone: +44 (0) 191 222 6184
Fax: +44 (0) 191 222 5622