Corresponding Regressions

Lois Shawver
Wed, 20 Mar 1996 09:02:05 -0800 (PST)


You said:
> I think you've got the essential idea of polarization. Do you
> acknowledge that this polarization allows us to determine which
> of two variables is derived from the other (Y from X1).
> If you do, then you are agreeing to something very profound.
> If the method of corresponding regressions really does
> this, then we can unfold the genetic roots of constructs.
> No other grid method can do this. And people outside of PCP have
> wanted something like this for a very long time.

No, I'm afraid I am not there yet. I'm not even sure what you
mean when you say that we can "determine which of two variables is
derived from the other". In some of your posts you seem to be saying
that you are not presuming that what you are calling Y is truly the
dependent variable, and that you are running more than one regression,
plugging in various variables as the IVs or DVs, in order to determine
which variable is Y. I don't see how you can use several regressions to
do this, if that is in fact what you are doing. Moreover, from
the account above, and some of the things you have said about the
weaving of constructs, I cannot tell whether you are also suggesting
that one of the X variables can be derived from the other.

This gets back to another question I have asked you and hope you
will answer. Why are you using the phrases "formal cause" and "formal
effect"? This is not a conceptual challenge, and I don't have the
intention of holding you to Aristotle's definition of "formal cause".
I just want to know how you are using this term in your theory, what
implications, or connotations, it has that you desire and feel it
brings to your theory. Why not just say "cause"?

Here's an example of something you say that puzzles me. Let me
present it here, but I note that you use the phrase "formally cause" and
I want to pin you down to explaining more precisely what you mean by
that. I have a hunch it's important in understanding what your theory is.

You said:
> For example, right now I am helping
> a nursing professor with her study of burn
> out in nurses. She has a couple of
> dependent variable measures of burnout.
> She has about 20 variables she thinks
> contribute to the generation of burnout.
> Without corresponding regressions, her
> analysis will be ambiguous and probably
> ignored. She can put the predictors of
> burnout in a stepwise regression, but this
> will not tell her which predictors formally
> cause (are the building blocks of) burnout.
> Some of her predictors may only be
> correlated dependent variables of burnout.
> Theories to put into LISREL are a dime a
> dozen and the results are speculative.
> The professor also can't really go out and
> subject all these nurses to many years of
> differing conditions in order to
> experimentally see which ones cause
> burnout. Corresponding regressions should
> tell her which of the predictors formally
> cause burnout, however, just by measuring
> existing variables.

So, corresponding regressions (running the regressions with various
variables serving as the dependent variable?) help me to detect which
is the true dependent variable, where the dependent variable is
defined as the "formal effect" of the "formal cause"?

Here's a quote from a previous note on corresponding regressions that
suggests some of my interpretation above. Let me take you through my
reading of it and maybe you can see where I need help in understanding
what you're saying. (The comments you appended to this in that
posting were simply your account of polarization.)

You said:
> NOTE: In the following x and y are not
> necessarily the same as X1, X2 or Y as
> described in the Core posting. Each variable
> is, in turn, treated as the x and y
> in order to discover which is the otherwise
> unknown X1, X2 and Y.

> ***************************************
> The procedure "requires conducting two
> regression analyses on the same variables,
> letting each serve as the predictor of the
> other in turn. First, either an x or y is
> used to predict the other, with the
> understanding that in eventual applied
> research their status as IVs and DVs will
> not be known. As the regression models
> are developed, the prediction errors are
> derived by subtracting the predicted values
> from the actual values of the predicted
> variable.

For each of the equations? Now, these variables are rankings the
subject has made of the importance of these three variables, on a
scale of 1-9?
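To make sure I follow the mechanics of that step, here is how I would
code it (a rough Python sketch; the function name and the toy ratings
are mine, not yours):

```python
def ols_errors(predictor, predicted):
    """Fit a simple least-squares line and return the prediction errors,
    i.e. the actual values minus the predicted values, as in the quote."""
    n = len(predictor)
    mx = sum(predictor) / n
    my = sum(predicted) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(predictor, predicted))
    sxx = sum((x - mx) ** 2 for x in predictor)
    slope = sxy / sxx
    intercept = my - slope * mx
    return [y - (intercept + slope * x)
            for x, y in zip(predictor, predicted)]

# Run the regression in both directions, since in applied use we do not
# know which variable is "really" the IV and which the DV.
x = [2, 4, 5, 7, 9, 3, 6, 8]   # made-up 1-9 ratings
y = [3, 5, 7, 8, 9, 4, 6, 9]
errors_y_given_x = ols_errors(x, y)   # x serves as predictor
errors_x_given_y = ols_errors(y, x)   # y serves as predictor
```

Is that what "letting each serve as the predictor of the other in
turn" amounts to?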

You said:
> These errors of prediction are then
> converted to absolute values in order to
> reflect the extremity of the errors.

Again, for each variable as it serves as a predicted variable? Are
we going to conclude that the equation that predicts the best
shows us the formal cause and formal effect?

You said:
> Next, the
> absolute values of the deviations from the
> mean of the predictor variable are determined
> in order to reflect the extremity of the
> predictor values.

If we have a lot of variance in our predictor variables (X1 & X2), we
have seen that these will give midrange predicted values. So, we
can use the absolute difference in the predictor variables to predict
whether the predicted variable will be extremely high or low.

You said:
> The correlation between
> these absolute deviations and the absolute
> errors is found. When the predictor
> variable is x this correlation is symbolised
> as rde(x)- the correlation between absolute
> deviations from the mean of x and the
> absolute errors predicting y. When y is the
> predictor variable this index is referred to
> as rde(y)- the correlation between absolute
> deviations from the mean of y and the
> absolute errors from predicting x.

I think that paragraph is really confusing.
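Let me try to say back what I think it means in code, and you can tell
me whether I have read you correctly (a sketch only; `pearson_r`,
`rde`, and the toy ratings are my names and data, not yours):

```python
def pearson_r(u, v):
    """Plain Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u)
           * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def rde(predictor, predicted):
    """My reading of the rde index: the correlation between the absolute
    deviations of the predictor from its mean and the absolute errors
    of predicting the other variable from it."""
    n = len(predictor)
    mx = sum(predictor) / n
    my = sum(predicted) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(predictor, predicted))
             / sum((x - mx) ** 2 for x in predictor))
    intercept = my - slope * mx
    abs_errors = [abs(y - (intercept + slope * x))
                  for x, y in zip(predictor, predicted)]
    abs_devs = [abs(x - mx) for x in predictor]
    return pearson_r(abs_devs, abs_errors)

# Toy 1-9 ratings, for illustration only:
x = [2, 4, 5, 7, 9, 3, 6, 8]
y = [3, 5, 7, 8, 9, 4, 6, 9]
rde_x = rde(x, y)   # x serves as predictor of y
rde_y = rde(y, x)   # y serves as predictor of x
```

Is rde(x) vs rde(y) nothing more than that function applied with the
roles of the two variables swapped?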

You said:
> The assessment of asymmetrical relations comes
> from a comparison of rde(x) and rde(y). The
> rde correlation should be more negative when
> the real dependent variable serves as the
> predictor."

What do you mean by asymmetrical relations? Not following the previous
paragraph, I am lost here.
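To test my overall reading, I tried simulating the situation I take
you to be describing: two independent bounded ratings X1 and X2, with
Y built from both, and the rde index computed in each direction. This
is only my sketch of your procedure, with my own names and simulated
data:

```python
import random

def pearson_r(u, v):
    """Plain Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u)
           * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def rde(predictor, predicted):
    """My reading of the rde index: correlation between the absolute
    deviations of the predictor from its mean and the absolute errors
    of predicting the other variable from it."""
    n = len(predictor)
    mx = sum(predictor) / n
    my = sum(predicted) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(predictor, predicted))
             / sum((x - mx) ** 2 for x in predictor))
    intercept = my - slope * mx
    abs_errors = [abs(y - (intercept + slope * x))
                  for x, y in zip(predictor, predicted)]
    abs_devs = [abs(x - mx) for x in predictor]
    return pearson_r(abs_devs, abs_errors)

# Simulated data: X1 and X2 are independent ratings on a bounded 1-9
# scale, and Y is built from both, so Y is the "real" dependent variable.
random.seed(1)
n = 2000
x1 = [random.uniform(1, 9) for _ in range(n)]
x2 = [random.uniform(1, 9) for _ in range(n)]
y = [a + b for a, b in zip(x1, x2)]

rde_y = rde(y, x1)   # the real DV serving as predictor
rde_x = rde(x1, y)   # a real IV serving as predictor
```

When I run this, rde_y comes out clearly negative while rde_x stays
near zero, which would fit your claim that the rde correlation should
be more negative when the real dependent variable serves as the
predictor. Is that the intended reading?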

..Lois Shawver