Tuesday 10 January 2017

Under What Conditions Will Meng's Test of Correlated Coefficients Come Out p < .05?

Meng's test of correlated correlation coefficients (Meng, Rosenthal, & Rubin, 1992) was the focus of my last blog. It looks like this:

$$\chi^2_{k-1} = \frac{(N - 3)\sum_{j=1}^{k}(z_j - \bar{z})^2}{(1 - r_x)\,h}$$

Where h is derived from this:

$$h = \frac{1 - f\bar{r}^2}{1 - \bar{r}^2}, \qquad f = \frac{1 - r_x}{2(1 - \bar{r}^2)} \ \ (\text{set } f = 1 \text{ if it exceeds } 1)$$

Here $z_j$ is the Fisher z-transform of the jth correlation, $\bar{z}$ is the mean of the $z_j$, $r_x$ is the median inter-correlation among the predictors, and $\bar{r}^2$ is the mean of the squared correlations.
Which looks a little scary to the uninitiated, but it's actually quite a simple thing.

When you plug in all the required values you end up with a chi-square value. To draw conclusions about the heterogeneity of a set of correlated correlation coefficients, you consult the significance of this chi-square value. If p is less than .05, you may well decide that the set of coefficients is heterogeneous.
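Since we'll be plugging in values repeatedly below, here's a minimal R sketch of the test, following the two equations above (the function name meng_het and its interface are mine, not the authors'):

meng_het <- function(cors, mi, n) {
  # cors : correlations of each predictor with the shared outcome
  # mi   : median inter-correlation among the predictors (r_x)
  # n    : sample size
  z     <- atanh(cors)                 # Fisher's r-to-z transform
  r2bar <- mean(cors^2)                # mean of the squared correlations
  f     <- min((1 - mi) / (2 * (1 - r2bar)), 1)   # f must not exceed 1
  h     <- (1 - f * r2bar) / (1 - r2bar)
  chisq <- (n - 3) * sum((z - mean(z))^2) / ((1 - mi) * h)
  df    <- length(cors) - 1
  c(chisq = chisq, df = df, p = pchisq(chisq, df, lower.tail = FALSE))
}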

Imagine a study where you have many individuals measured on many different things. Let's say (1) IQ, (2) GPA, (3) blood pressure, (4) the number of dogs owned, and (5) how many episodes of Westworld they've watched. In addition you have a shared outcome variable of interest, let's say, procedural learning. What you want to know is whether the linear associations between the 5 predictor variables and the outcome variable are differential. That's where this test might come in handy.

But under what circumstances will you get a p less than .05, when you test the heterogeneity of 5 predictor variables with one outcome variable? What will the correlations "look" like?

A quick simulation might help answer these questions. To be sure, significance will, as always, depend heavily on sample size (all else equal, the larger the sample, the more likely a nice low p). So we need to keep the sample size constant in our simulation. Let's go with a sample size quite typical of a lot of the psychological research I read: n = 200.

Ok so our n is 200. Let's take as a baseline correlations of exactly the same magnitude, with a median inter-correlation of an identical figure too (that means the median of the correlations between all five Xs - in this case the 10 correlations which sit above or below the diagonal of the correlation matrix of the five variables). We'll go with .50 for all, which is a large effect size according to another classic paper from the same year (Cohen, 1992).

cors <- c(.50, .50, .50, .50, .50)
mi <- .50
n <- 200
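Feeding these into the meng_het sketch from above:

meng_het(cors, mi, n)
# chisq = 0, df = 4, p = 1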

The coefficients are the same. Unsurprisingly, we get a chi-square of 0 with df = 4. Which, of course, is not significant.

Now let's mix it up a bit. Let's have 5 correlation coefficients drawn randomly from a normal distribution with a mean of .50 and an SD of .05. We can do this using this piece of code in R: round(rnorm(5, mean = .50, sd = .05), 2).

cors <- c(.51, .52, .54, .43, .53)
mi <- .50
n <- 200

And we get a chi-square of 4.21, again df = 4, which is p = .38. So we're getting there.
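That p, incidentally, is just the upper tail of the chi-square distribution, which you can get directly in R:

pchisq(4.21, df = 4, lower.tail = FALSE)
# 0.378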

Let's crank the SD up to .10, all else constant: round(rnorm(5, mean = .50, sd = .10), 2).

cors <- c(.49, .51, .60, .41, .45)
mi <- .50
n <- 200

And we get a chi-square of 12.26, again df = 4, which is... p = .015. Significant! We're there already. Mission accomplished. With an SD of .10 we've got a significant test of heterogeneity. We might then conclude on this basis that we have heterogeneous correlated correlation coefficients. In other words (and to return to my not-ridiculous example study), the linear associations of IQ, GPA, blood pressure, the number of dogs owned, and the number of episodes of Westworld watched with procedural learning are differential. This is fantastic news if that's what we've hypothesized 😄. Or if we haven't hypothesized that but will do now 😉.

This will probably get very boring very quickly, but let's now look at how the median inter-correlation between the Xs affects our results.

Let's set the median inter-correlation to .30 (a medium effect size) for our SD .10 array of coefficients.

cors <- c(.49, .51, .60, .41, .45)
mi <- .30
n <- 200

Now that's a chi-square of 9.08, which is p = .059. So now it's marginal. Maybe you'll run with that.

Let's go down a notch to a median inter-correlation of .20.

cors <- c(.49, .51, .60, .41, .45)
mi <- .20
n <- 200

Now that's a chi-square of 8.09, which is p = .088. Hold on, that's not good.

Take it down one more.

cors <- c(.49, .51, .60, .41, .45)
mi <- .10
n <- 200

Oh dear. Chi-square of 7.33, which is p = .119. That's definitely not good 😞.

So what this little exercise shows is that the likelihood of Meng's test of heterogeneity coming out at p less than .05 appears to increase as the standard deviation of the coefficients increases. Which makes perfect sense. And also as the median inter-correlation of the Xs increases. Which makes sense too, once you notice that r_x only enters the denominator of the chi-square, (1 − r_x)h, and that denominator shrinks as r_x grows: the same spread of coefficients gets divided by a smaller number.
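A quick check on that denominator, using the SD .10 coefficients (denom here is just a throwaway helper of mine):

cors  <- c(.49, .51, .60, .41, .45)
r2bar <- mean(cors^2)                  # mean squared correlation, ~ .246
denom <- function(rx) {
  f <- min((1 - rx) / (2 * (1 - r2bar)), 1)
  h <- (1 - f * r2bar) / (1 - r2bar)
  (1 - rx) * h
}
round(sapply(c(.50, .30, .20, .10), denom), 2)
# 0.61 0.82 0.92 1.02

The numerator doesn't involve r_x at all, so the chi-squares above are one fixed quantity divided by these values: 12.26 × (0.61/1.02) ≈ 7.33, which is exactly our r_x = .10 result.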

Flipped the other way: the less variance in your coefficients, and the less they are themselves interrelated, the less chance you have of a p < .05.

When our Xs are inter-correlated with a large effect size (median r = .50), all we need is coefficients with an SD of .10 for a significant chi-square. And the chance of a significant chi-square only increases as sample size increases. Since h doesn't depend on n, the chi-square scales directly with n − 3: re-run our significant example (the one above, where we got p = .015) with an n of 1000 and we get a chi-square of 62.03(!!!), which is just completely off the charts in terms of significance (p < .00001). Now this is important, because in the Meng et al. (1992) publication their example is four obviously heterogeneous coefficients: .63, .53, .54, and −.03 (I mean, what on earth is this guy doing at the end). But what I have shown here is that they don't need to be anywhere near as obviously different for a pleasing p.

Here's the code for this little statistical foray (each call uses the meng_het sketch from above):
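meng_het(c(.50, .50, .50, .50, .50), mi = .50, n = 200)   # chisq =  0.00, p = 1
meng_het(c(.51, .52, .54, .43, .53), mi = .50, n = 200)   # chisq =  4.21, p = .378
meng_het(c(.49, .51, .60, .41, .45), mi = .50, n = 200)   # chisq = 12.26, p = .015
meng_het(c(.49, .51, .60, .41, .45), mi = .30, n = 200)   # chisq =  9.08, p = .059
meng_het(c(.49, .51, .60, .41, .45), mi = .20, n = 200)   # chisq =  8.09, p = .088
meng_het(c(.49, .51, .60, .41, .45), mi = .10, n = 200)   # chisq =  7.33, p = .119
meng_het(c(.49, .51, .60, .41, .45), mi = .50, n = 1000)  # chisq = 62.03, p < .00001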
