Re: your research efforts

anima@devi.demon.co.uk
Sun, 16 Oct 1994 16:00:57 +0000

In responding to a posting by A.J.Zolten on 14 Oct which included

>> I wonder if you have considered using the Rorschach percepts as
>> elements in your construct elicitation technique? More specifically,
>> take the first 15 or so percepts and place them as elements across
>> the top of the grid, and ask your subjects to come up with their
>> bipolar constructs.

Jack Gerber wrote,

>I'm not completely sure I understand the suggestion. I'm a bit rusty on
>the technique as I have been involved with other work for several years.
>If I understand this correctly, you are suggesting I take some of the
>subject's own responses and let them come up with the pertinent scales
>(constructs) to rate them on as in a classic repertory grid.
>
>The problem with this is that this is a highly individualized technique
>which would not allow comparisons to ratings given by other subjects. It
>wouldn't shed light on whether subjects placed percepts in the same area
>on a common construct space. The original hypothesis was that the
>responses subjects chose have something in common which would be revealed
>by a common placement in the "space" created by the underlying factors.
>Unless different subjects can be compared by their placement of the
>percepts in the same semantic space, there is no way to evaluate this.
>
>As I write this response I realize that I could ask subjects to develop a
>series of constructs for their percepts and then find out what all the
>-constructs- had in common. It might reveal the same thing. If I then used
>the constructs in a multi-dimensional scaling I might find a back door.
>
>I would appreciate any comments on this off-the-wall idea.

I've two comments to make.

a) Indeed, it's a highly individualised technique! Even after you've done
your content analysis of all the individually-elicited constructs to find a
set which appear to share common meanings, and have given that set to your
main sample of respondents as a standard set of constructs, you've only your
faith that the different respondents will understand the standard set in the
same way! But, courage mes braves, isn't that the assumption we all make
when using supplied constructs with large samples of respondents?

b) Borman addressed the same difficulty in devising standard rating scales
for employee performance appraisal: how could he be sure that a particular
construct in a standard set meant the same to different respondents? If I
remember correctly, he did the following.
- piloted a standard set of adjectives relevant to his topic of interest
(which happened to be "leadership qualities" in an army officer sample)
- asked respondents to rate, on a one-to-ten scale, the extent to which each
adjective in the set applied to a given construct (1 = unrelated, 10 = highly
related, say)
- defined the meaning of that construct to each respondent in terms of the
respondent's string of scores across the adjective set
- defined the extent to which any two respondents had the same meaning for
a given construct in terms of the correlation coefficient calculated
between their two strings of scores
- did so across the whole sample to discover which constructs were understood
in the same way by large numbers of respondents (a rough sketch of the
correlation step follows this list)

An elegant approach; but of course it only pushes the problem of meaning one
step back into the background, since Borman had to assume that the adjectives
themselves were understood similarly by all respondents. I don't recall how
he derived the original adjective set or how he justified calling them
"standard" (I doubt it involved the amount of prior research which led Osgood
to the "standard" Semantic Differential scales). On the other hand, the
adjectives are standard only for a given topic, army leadership behaviour,
and this specificity to a given topic is perhaps more plausible to those of a
PCP persuasion than the generality claimed by Osgood. And it _was_ a sincere
and powerful attempt to go beyond the wretched little "5-point scale"
approach that is still common in so much of the work in performance
appraisal.

Colleagues wishing to follow up might care to search out the following
references:

Borman, W.C. (1983). "Implications of personality theory
and research for the rating of work performance in
organizations". In F.Landy, S.Zedeck, & J.Cleveland (eds.),
Performance Measurement and Theory. Hillsdale, NJ:
Lawrence Erlbaum.
Borman, W.C. (1987). "Personal constructs, performance
schemata, and 'folk theories' of subordinate
effectiveness: explorations in an army officer sample".
Organizational Behavior and Human Decision Processes,
40, 307-322.

There's a bit more about his technique in my own

Jankowicz, A.D. (1990). "Applications of personal construct
psychology in business practice". In G. Neimeyer & R. Neimeyer
(eds.), Advances in Personal Construct Psychology, vol. 1.
Greenwich, Conn.: JAI Press.

Maybe Borman's approach will trigger ideas even if you don't find his
specific technique helpful....

Regards to all,

Devi Jankowicz