re: double-loop learning
Thu, 31 Oct 1996 22:32:35 +0100

On this topic, Gary Blanchard writes:

>I gather you are concerned with the value and significance of 'double -
>loop learning,' in and of itself...AND with it as a symptom of the lack
>of value of much of the field of Organizational Behavior/Training &
>Organization Development.
>Am I correct here?

Yes indeed.

>as a fellow practitioner of the field for the past 30 years, I would be
>pleased to
>offer some ideas and comments for consideration.

And there were enough hints from both Robin and Magnus that the concept was
worth exploring, and you've written some textbooks yourself: so, yes
please, Gary, correct my excesses by all means.

Meanwhile, Robin and Magnus both hinted that my ideas on this topic might
perhaps need a wee bit more explication; both reiterated my own views about
management training concepts, but cautioned me about being too pernickety
about conceptual looseness, along the lines that even loose thinking can
have an heuristic use in training situations. Sure 'nuff.

But perhaps I can say a bit more about my own thinking concerning "double
loops" (not exactly my own thinking, but what I've understood to be of
value from Gordon Pask's ideas), and then a bit about the analogies to pcp.
Here goes.

1. The Finite State Machine
In general cybernetic thinking, this is a system in which the current state
is predictable from its previous state and previous input, or,
S(t) = f [ S(t-1), I(t-1) ]
Here, a "state" is conceptualised as a string of _particular_ values for
each and every one of its elements, defining the present status of the
system.

To give a purely invented example, suppose we wish to say something about
what makes an employee satisfied, and our knowledge is such that we
conceptualise job satisfaction as a system comprising the elements of
variable A = Reward, which takes, let's say, 3 values: "insufficient",
"tolerable", and "generous";
variable B = Expectancy, i.e. perception of the chances of a particular
action being rewarded, which takes, say, 2 values: "likely" and "unlikely".

Then one particular state S1 for this simple system would be,
(A = "tolerable", but B = "unlikely")
and another might be S2,
(A = "tolerable", and B = "likely").

If (and only if!) one felt that the employee can be modelled as a Finite
State Machine, one would then start asking questions about what sort of
Input (I) would lead the person from State S1 to State S2, and one would be
making the fairly firm prediction that future States (identified via the
variable called "Effort") can be predicted as some function of current
State and current Input.
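To make the shape of that concrete, here's a wee sketch in Python (the transition rule, the input labels, and the example run are all pure invention on my part; the point is only the form S(t) = f[ S(t-1), I(t-1) ]):

```python
# A minimal sketch of the employee as a Finite State Machine:
# next state = f(previous state, previous input).
# All values and rules here are invented for illustration.

def f(state, inp):
    """Transition function: returns the next (Reward, Expectancy) state."""
    reward, expectancy = state
    if inp == "bonus paid":
        return ("generous", "likely")
    if inp == "bonus promised but withheld":
        return (reward, "unlikely")
    return state  # no relevant input: the state persists

s1 = ("tolerable", "unlikely")
s2 = f(s1, "bonus paid")
print(s2)  # -> ('generous', 'likely')
```

One would then ask which Inputs carry the system from S1 to S2; in this toy version, a single "bonus paid" does the trick.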

If you felt like it, you could for example express Vroom's thinking about
motivation in these terms; you'd realise that Equity Theory demanded a more
complex specification of "State" which included some element
C = whether the person perceives him/her-self as better rewarded or less
well rewarded than people doing similar work given their Efforts, and so
on.

And the simple equation is formally identical to Miller, Galanter &
Pribram's notion of the TOTE unit (remember? Test-Operate-Test-Exit, a
simple feedback loop which checks whether desired goals have yet been
achieved, or whether they require further Operations to be achieved).
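The TOTE loop itself takes only a few lines to sketch (again in Python, again with invented detail; the "goal" here is just reaching a desired Effort level):

```python
# Test-Operate-Test-Exit: keep Operating until the Test says the goal
# has been achieved, then Exit. Purely illustrative.

def tote(state, goal_test, operate, max_steps=100):
    steps = 0
    while not goal_test(state):      # Test
        state = operate(state)       # Operate
        steps += 1
        if steps >= max_steps:       # safety valve, not part of TOTE proper
            break
    return state                     # Exit

# e.g. raise Effort one notch at a time until it reaches level 3
result = tote(0, goal_test=lambda effort: effort >= 3,
              operate=lambda effort: effort + 1)
print(result)  # -> 3
```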


2. The Finite Function Machine,
as Pask conceptualised it, focusses our attention on the function "f" in
the simple expression
S(t) = f [ S(t-1), I(t-1) ].
This kind of system is more complex: in some circumstances the function "f"
may apply, but in some other circumstances there is a _different_
relationship between current State + Input and future State, such that a
different function "g" applies.

e.g. In some circumstances you can explain current job satisfaction by
seeing it take a value equal to f = 0.5 of the previous State x Input
combination, as it were, while in others, you can explain current job
satisfaction by seeing it take a _different_ value, g, which might = 1.5.
What's being said is of the kind "in a recession there is one sort of
effective reward x expectancy combination given the Inputs; in times of
plenty a completely different sort of reward x expectancy combination given
the same Input makes for effectiveness in motivating people."

All well and good. Now, what I see as the power of this simple model stems
from the following considerations (and still taking my ideas from Pask):

a) for the system as a whole to be as it is, there must be some
superordinate current-future State relationship which _selects_ the
function to apply: it asks the question "do we choose function "f" or
function "g" to model what is happening?"
b) this can be seen
(cripes! I wish I could draw this! and no, I can't attach a PICT file to
this e-mail cos I run a Macintosh and you all run DOS/Windows machines out
there, which tend to screw up my PICT attachments to e-mails)
as a TOTE unit which sits on top of the TOTE unit expressing the
lower-level finite state machine. Basically, the higher level's Operate
activity is to select either function "f" or function "g" to the lower
level; and the lower level's Operate activity selects States given Inputs.
There's a diagram of this contraption in either of the two Jankowicz
references I gave in my last e-mail; shout if you want the references.
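Failing the diagram, a few lines of Python may serve; the "recession" test and the 0.5/1.5 multipliers below are just the invented numbers from my example above:

```python
# Two TOTE units stacked: the higher level's Operate selects which function
# ("f" or "g") governs the lower level; the lower level's Operate selects
# the next State given the Input. The selection rule is invented.

def f(state, inp):                   # applies in a recession
    return 0.5 * state * inp

def g(state, inp):                   # applies in times of plenty
    return 1.5 * state * inp

def higher_level(circumstances):
    """Superordinate loop: choose the function for the lower level."""
    return f if circumstances == "recession" else g

def lower_level(state, inp, circumstances):
    """Subordinate loop: next State from current State and Input."""
    transition = higher_level(circumstances)
    return transition(state, inp)

print(lower_level(2.0, 1.0, "recession"))  # -> 1.0
print(lower_level(2.0, 1.0, "plenty"))     # -> 3.0
```

Note that the lower level, given only f (or only g), has no way of representing why its function changed: that knowledge lives a level up.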
c) crucially, the activities of the higher-level part of the system can't
be explained, understood, modelled, by the lower-level part of the system;
they operate at a higher level of language. In Contingency Theory, you
can't understand why your explanations don't work without being aware that
they _are_ contingent on some wider circumstances not explicable in terms
of the basic explanations per se. What Gödel was on about is that any
language is incomplete in that it cannot be self-referential, i.e. can't
entirely explain itself in its own terms, but can only be explained
_within_ a superordinate system. A function cannot be a member of its own
range and domain.


Now, here's where cybernetics and me part company: I'm not enough of a
logician to take this further (though I understand that the last statement
about functions not being members of their own range and domain is true
only of the von Neumann-Bernays formulation and doesn't necessarily apply to
the approach taken in Quine's "New Foundations". Help! I'll take that as
Gospel but ignore it until a friendly mathematical logician points out any
handy implications, and pass on rapidly.)

But what _does_ strike me as very relevant to the present discussion is
that double-loop learning, if the wretched concept is to mean anything at
all beyond being yet another re-description of stuff we know very well,
becomes useful if we see the "second loop" as being a way of "talking
about", "making sense of", "being capable of selecting among", states of
the "first loop" in a learning system.

This is powerful. It allows me to make statements like:
"Yes, I know I can learn how to run this organisation better if I take note
of the consequences of my actions and adjust my future actions to bring the
results closer to my intended objectives... BUT BUT BUT in conditions,
say, of rapid change, sheer control by feedback isn't enough, and I have to
be able to decide when existing action-outcome combinations are
insufficient, and I have to develop and use _very different_ action-outcome
combinations by thinking "of-and-about" them"
"I have to have a different (superordinate) kind of knowledge about the
rules of change in order to do so"
"Strategic Management differs from operational management because it
doesn't depend on simple feedback mechanisms; it must _anticipate_ drastic
changes sufficiently to alter the ways in which the organisation normally
operates."


Phew! And what, after all this, is the link to personal construct theory?

Well, I find it useful to remember the Organisation Corollary and all that
stuff about some constructs being in a superordinate relationship to
others. Particularly, that it is in the nature of values (core constructs)
that they govern the ways in which subordinate constructs are deployed in
given circumstances; and that you can't effectively change people's core
constructs but, in order to bring about the major changes involved in
therapy, counselling or training, one way is to engage in
experiments-with-life in which the client is encouraged to generate and
explore new, alternative (lower-level) constructs which provide expression
to existing core constructs _in different ways_. Management development and
organisation development are surely ways of doing that: they're not
attempts to change personal values. (Are they?)

"Resistance to change" thereby becomes a way of understanding why people
find it functional to resist threats to their core constructs, rather than
(as in so much of the stuff that passes for Organization Behaviour textbook
exposition) a way of blaming employees because the buggers _persist_ in
their old ways instead of going along with what the company, or Trainer,
tells them is really in their own best interests. (And I'm on occasion a
Trainer myself, so I've no particular axe to grind here.)

And (in some rather sloppy way or other in my thinking) there follows the
_integrity_ of the person as his or her own Finite Function Machine,
capable of choosing actions based on values, rather than as a Finite State
Machine, limited to following some simple action-consequences feedback
loop. (Sorry about that word "Machine"; I know it seems a bit cold: it's
just a name for a particular analytic framework, that's all.)


Oh, and while I'm on the subject of Organization Behaviour textbook pop
concepts, beware the peddlers of "empowerment"! Remember the tyranny of the
year-end Balance Sheet, according to which even the most "empowered"
organisation will "downsize" (= "sack", "make redundant", "throw out on the
street") sufficient staff to make the books balance next year... bring back
Instrumental Motivation! (Or do away with the concept of motivation a la
Kelly and start asking more interesting questions...)

But that's another hobby-horse; enough for today.

Magnus, Robin, Gary, anyone else I haven't bored by this self-indulgence:
any comments? Over to you.

Kind regards,

Devi Jankowicz