Re: Checking out the grid with the subject

Barry Cooper (bcooper@newville.u-net.com)
Mon, 31 May 1999 20:53:21 +0100

Tony Downing writes:
>When it comes to plans, such as ours, to use rep. grids to investigate, in
>as idiographic a way as possible, the nature of the cognitive changes that
>we hope will be produced by a training course, there is a possible
>problem. The rep grid procedure itself may well do some good! This is
>fine for the individual but not fine for assessing the ways in which the
>training, rather than the rep. grid, affects their construct system. In
>this particular study, we don't want to find ourselves having evaluated the
>effect of repertory grid therapy!
<snip>

I was fascinated by this as I have been tussling with similar thoughts
around my own research design. Devi has since responded in greater detail to
make the point about the fundamental difference between Kelly's Constructive
Alternativism and a research endeavour founded on positivist 'testing' type
principles.
I suppose, Tony, that if you need to test whether the training 'works',
then a rep grid is really rather a participative, interactive tool [when
applied according to Kelly's philosophy of C.A. - and let's face it, not
everybody does use rep grids in the Kellian way!].
But why not accept the rep grid as an intervention, or as part of the
training? This rampant variabilisation seems a hopeless quest in the PCP
world! The value of an idiographic tool is that it produces qualitatively
rich data which doesn't _have_ to be quantitatively analysed, although I
recognise from this discussion list that many people feel an irresistible
need to get into measurement and quantification.
Without wanting to challenge your research purpose, I wonder who the
research is for? If the subjects are to be 'included', as they cannot fail
to be in a properly and sympathetically administered rep grid, I suspect
that they would greatly enrich the reliability and validity of the data
through a 'checking out' of the meaning of their constructs. A content
analysis may well indicate what aspects of the training, if any[!], are
being construed and, if so, in what ways it is 'meaningful' to them as
parents.
This probably hasn't helped, but your posting prompted me to address the
extraordinary quantification that goes on around grids, all of which seems
so singularly unpersuasive when read against the richness of Kelly's
philosophy.
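By way of illustration - and this is only a rough sketch of one common
recipe, using an assumed 5-point scale and made-up ratings of my own
rather than anything Devi or Tony actually computed - the sort of
'matching score' between two constructs that Devi mentions is usually
little more than percentage-agreement arithmetic like this:

# Minimal sketch: a simple percentage 'matching score' between two
# constructs rated over the same elements. The scale, names and figures
# are illustrative assumptions only.
def construct_match(ratings_a, ratings_b, scale_min=1, scale_max=5):
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Both constructs must be rated on the same elements")
    max_diff = (scale_max - scale_min) * len(ratings_a)  # worst possible disagreement
    total_diff = sum(abs(a - b) for a, b in zip(ratings_a, ratings_b))
    return 100.0 * (1 - total_diff / max_diff)           # 100 = identical ratings

# Two constructs rated on six elements, 1-5 scale (invented figures):
warm_cold      = [5, 4, 4, 2, 1, 5]
reliable_flaky = [5, 5, 4, 2, 1, 4]
print(construct_match(warm_cold, reliable_flaky))        # roughly 91.7 - a 'high match'

Of course, a figure of 91% or so says nothing about whether one construct
is implicationally dependent on the other - which is exactly Devi's point
about asking the person rather than trusting the number.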
Nonetheless, I may well e-mail you directly as your research has much in
common with mine!
Best wishes,
Barry Cooper.

barry_cooper@bristol-city.gov.uk

-----Original Message-----
From: Tony Downing <A.C.Downing@ncl.ac.uk>
To: pcp@mailbase.ac.uk <pcp@mailbase.ac.uk>
Date: Sunday, May 30, 1999 2:19 PM
Subject: Checking out the grid with the subject

>Devi raises the profound point about rep grids, that:
>
>>a high matching score between any two elements, or constructs, is just a
>> number representing a rating. If you want to draw inferences about
>> a person's structure in general, rather than simply as shown by the
>>ratings, it's often faster to ask the person. Particularly, a high
>>matching score between two constructs may _not_ mean that the constructs
>>are causally linked ("whenever you think a - not a, you also tend to
>>think b - not b; does that mean that b - not b is implicationally
>>dependent on a - not a?"), but simply associated without cause; or,
>>indeed, simply coincidentally present, a function of how adequately you'd
>>sampled the whole realm of discourse in choosing the elements you used.
>
>This problem, that the specific instances that you sample, with some
>purpose in mind, always come bundled with a lot of particular features
>which are not what you're after, is a special case of the general research
>design problem of confounding with extraneous variables. In most kinds of
>research, it's dealt with by having samples big enough so that all these
>particular but irrelevant characteristics ("error variance") tend to cancel
>out. In rep grid work it could in principle be dealt with, presumably, by
>having a gigantic grid, so that each _kind_ of element that you'd be
>interested in would be represented by a lot of _particular_ elements - but
>presumably that is just not practicable.
>
>In that case, Devi's recommendation, that we check out with the
>subject/client that small element distances or high construct correlations
>are not just arising by chance because of unforeseen quirks of the
>particular choice of elements, seems very important.
>
>When it comes to plans, such as ours, to use rep. grids to investigate, in
>as idiographic a way as possible, the nature of the cognitive changes that
>we hope will be produced by a training course, there is a possible
>problem. The rep grid procedure itself may well do some good! This is
>fine for the individual but not fine for assessing the ways in which the
>training, rather than the rep. grid, affects their construct system. In
>this particular study, we don't want to find ourselves having evaluated the
>effect of repertory grid therapy! Presumably, if we discuss the way the
>analysis comes out with the participants immediately (rather than in
>sessions after the end of the course) the risk of the rep. grid experience
>itself producing change is much greater.
>
>Does this mean it's not such a bright idea after all, to use rep. grid
>methods to investigate these changes?
>
>
>Tony Downing,
>Dept. of Psychology, University of Newcastle upon Tyne, England.
>A.C.Downing@ncl.ac.uk
>
>
>
