Monday 30 January 2017

Zou's (2007) MA Method for Confidence Intervals for the Difference Between two Overlapping Correlation Coefficients Made Easier in R

Pearson's r quantifies the linear association between two variables. It's hard to imagine personality and social psychology without it. It's quite a simple thing, and it emerged in earnest way back in the 1880s. It looks like this* (for a sample):

r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\,\sqrt{\sum_{i}(y_i - \bar{y})^2}}

And you can convert it to a t value with this simple equation:

t = \frac{r\sqrt{n - 2}}{\sqrt{1 - r^2}}
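
Just to see both in action, base R's cor.test() does all of this in one go (the data here are simulated purely for illustration):

# simulate two correlated variables and test the correlation
set.seed(1)
x <- rnorm(66)
y <- 0.4 * x + rnorm(66)
cor.test(x, y)  # prints r, the t statistic on n - 2 df, and a p value
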
Sometimes researchers are interested in how Pearson's r correlation coefficients compare. Perhaps they are interested in whether two or more coefficients are different, because their theory says they should be. They probably won't want to do this by simply looking at the two coefficients and seeing whether they are nonidentical. No, that won't fly at the journals, so they'll want to use some kind of statistical procedure.

Until recently, I hadn't realized that there were quite so many ways of comparing correlated correlation coefficients (I did a blog on testing heterogeneity if you're interested). According to Meng, Rosenthal & Rubin (1992), Hotelling's t test was the most popular method of comparing two correlated (i.e. overlapping) correlation coefficients until the early 90s, despite the problems with it noted by Steiger (1980). The psych package contains two tests from that era, Williams's test and one of Steiger's (but I'm not sure precisely which). The latter you can do on this cool webpage made by Ihno Lee and Kristopher Preacher.
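
If you want one of those older tests in R, the psych package's r.test() covers the overlapping case; as far as I can tell you just give it the sample size and the three correlations (here using the Zou example numbers that appear further down):

library(psych)
r.test(n = 66, r12 = .396, r13 = .179, r23 = .088)  # Steiger-type test of r12 versus r13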

But since then at least two alternative methods have been put forward. First, that of Meng, Rosenthal, and Rubin (1992): a Z-test-type method "equivalent to Dunn & Clark's test asymptotically but is in a rather simple and thus easy-to-use form" (p. 172).

Second, and the subject of this blog, that of Zou (2007): a technique described by its author as a "modified asymptotic" (MA) method. This one's not so much a test as a method for creating confidence intervals around the difference between two correlated correlation coefficients. As encouraged by Zou, you can use it in a test-like way by checking whether or not the interval includes 0.

I'm not sure if it's implemented anywhere else, but having an R script helps you understand the mechanics of it anyway, so here's how you can do it quickly in R. Download this R script from my dropbox and follow the instructions. The code is set up for the example given by Zou, but just substitute in your own coefficients and 95% confidence intervals for the focal coefficients.

You're best off just following that script, but in short, from the kick-off you need to know three things: the two correlated coefficients you're interested in, r12 and r13 (same X, different Ys), and the correlation between the two Ys, given as r23.

r12 <- .396
r13 <- .179
r23 <- .088

From this we need to find the Fisher z 95% confidence interval lower and upper bounds around the two correlation coefficients we're interested in. That's four things. We call them l1 and u1 (lower and upper bounds for the first coefficient), and l2 and u2 (lower and upper bounds for the second coefficient). This is really easy in R (e.g. r.con(r12, 66, p = .95)). Plug them into this bit:

l1 <- .170
u1 <- .582
l2 <- -.066
u2 <- .404
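
Rather than typing the bounds in by hand, you can get them straight from psych's r.con() (n = 66 is the sample size from Zou's example; the object names below are mine rather than the script's):

library(psych)
n <- 66
ci1 <- r.con(r12, n, p = .95)  # Fisher z lower and upper limits for r12
ci2 <- r.con(r13, n, p = .95)  # and for r13
l1 <- ci1[1]; u1 <- ci1[2]
l2 <- ci2[1]; u2 <- ci2[2]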

Then we need to square r12 and r13, and then square and cube r23.
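
In R that's just the ^ operator (these object names are mine; the script's may differ):

r12.sq <- r12^2
r13.sq <- r13^2
r23.sq <- r23^2
r23.cu <- r23^3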

Then we need to find the correlation between the two correlations, which you get with this (it involves those r squares and the r cubed):

c = \frac{(r_{23} - \tfrac{1}{2} r_{12} r_{13})(1 - r_{12}^2 - r_{13}^2 - r_{23}^2) + r_{23}^3}{(1 - r_{12}^2)(1 - r_{13}^2)}
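
Carrying on sequentially with the objects above, that works out as (c.r12.r13 is just my name for it):

c.r12.r13 <- ((r23 - 0.5 * r12 * r13) * (1 - r12.sq - r13.sq - r23.sq) + r23.cu) /
  ((1 - r12.sq) * (1 - r13.sq))
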
Then we chuck all of that into this, for the lower bound:

L = r_{12} - r_{13} - \sqrt{(r_{12} - l_1)^2 + (u_2 - r_{13})^2 - 2c(r_{12} - l_1)(u_2 - r_{13})}
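
Or, sticking with my object names:

lower <- r12 - r13 - sqrt((r12 - l1)^2 + (u2 - r13)^2 -
  2 * c.r12.r13 * (r12 - l1) * (u2 - r13))
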
And the same for the upper bound (the same sort of equation, but a little different):

U = r_{12} - r_{13} + \sqrt{(u_1 - r_{12})^2 + (r_{13} - l_2)^2 - 2c(u_1 - r_{12})(r_{13} - l_2)}
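
And the R equivalent, with the same caveat about names:

upper <- r12 - r13 + sqrt((u1 - r12)^2 + (r13 - l2)^2 -
  2 * c.r12.r13 * (u1 - r12) * (r13 - l2))
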
If you do each step sequentially in the R script you'll end up with the lower and upper limits of the confidence interval for the difference between two correlated correlation coefficients, as presented in Zou (2007). Run through it with the Zou example, and then plug yours in.
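
For what it's worth, running the example numbers above through the snippets in this post gives me an interval of roughly -.09 to .52, which is what you should see from the script too if everything has gone in correctly:

round(c(lower, upper), 3)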

If you missed the link above, you can download the script again here.

Zou, G. Y. (2007). Toward using confidence intervals to compare correlations. Psychological Methods, 12(4), 399-413.

*(equations courtesy of the Wikipedia page)


2 comments:

  1. I implemented some of the functions in my book and provided R code for others here:

    https://seriousstats.wordpress.com/2012/02/05/comparing-correlations/

    I also linked to an Excel spreadsheet for non-R users. It is also worth looking at Wilcox's functions in some cases (see notes in the accompanying blog). Also bear in mind the general problem of comparing standardised effects between studies and samples.

    Replies
    1. Thanks for the link Thom! Your R code is a hell of a lot better than mine. Think I might have to buy your book!
