Monday, 13 February 2017

Test Contrasts Among a Set of Correlated Correlations (Meng, Rosenthal, & Rubin, 1992, using equation 6): Play along in R




Imagine for a moment that your research involves many participants, each measured on an array of predictor variables and a focal criterion variable. Imagine that, working in exploratory mode, you correlate each variable with the focal criterion variable. Having done this, you want to know whether one of the variables is more strongly associated with the criterion than the others, or than some subset of the others. Alternatively, imagine that, working in confirmatory mode, you have a theory predicting that one variable will be more strongly associated with the criterion variable than the others.

How do you go about testing either of these things? Well, thanks to Meng, Rosenthal, and Rubin (1992), you might do it by testing contrasts among a set of correlated correlations with the following equation:


The contrast test statistic (their equation 6) is:

Z = (Σ λi·zi) × sqrt[ (N − 3) / ( (1 − rx)·h·Σ λi² ) ]

where λ (lambda, the symbol that looks like an upside-down y) is the contrast weight for each X, zi is Fisher's z transform of the correlation between Xi and Y, rx is the median intercorrelation among the Xs, and h is found via equations 2 and 3:

f = (1 − rx) / (2·(1 − r̄²))    [equation 2, with f capped at 1]

h = (1 − f·r̄²) / (1 − r̄²)     [equation 3]

Here r̄² is the mean of the squared correlations between the Xs and Y. So the full thing boils down to three equations.
We can use the code from my previous blog post to find h, so that part is sorted; it's just the rest we need to worry about.

The following code allows you to play along with the example in the paper. As usual, it can easily be modified for use with your own data: just plug in your own coefficients, the N, and the median intercorrelation, then plug in your contrast weights in the second step. Important: as with most of the R scripts on this blog, it won't work if the psych package is not installed and loaded.

So did you want to test whether one thing you're interested in is correlated with something to a greater or lesser extent than a bunch of other things? You can do that with this method.
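Before the step-by-step version below, here is the whole computation wrapped up as a single function — a minimal sketch of equations 2, 3, and 6 in base R. The function name and argument names are my own, and base R's atanh() stands in for psych's fisherz():

```r
# Contrast test among k correlated correlations (Meng, Rosenthal, & Rubin, 1992).
# r: correlations of each X with Y; lambda: contrast weights (should sum to zero);
# n: sample size; r_med: median intercorrelation among the Xs.
meng_contrast <- function(r, lambda, n, r_med) {
  z <- atanh(r)                                  # Fisher's z transform
  r2bar <- mean(r^2)                             # mean squared correlation
  f <- min((1 - r_med) / (2 * (1 - r2bar)), 1)   # equation 2 (f capped at 1)
  h <- (1 - f * r2bar) / (1 - r2bar)             # equation 3
  Z <- sum(lambda * z) *
    sqrt((n - 3) / (sum(lambda^2) * (1 - r_med) * h))  # equation 6
  list(f = f, h = h, Z = Z, p = pnorm(-abs(Z)))  # p is one-tailed
}

# The paper's example: the first three Xs contrasted against the fourth
res <- meng_contrast(r = c(.63, .53, .54, -.03),
                     lambda = c(1, 1, 1, -3),
                     n = 15, r_med = .37)
round(c(f = res$f, h = res$h, Z = res$Z), 4)
```

Run on the paper's example, this reproduces f = .4159, h = 1.1871, and Z = 2.34 (2dp), matching the step-by-step calculation below.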



#Testing a Contrast Among CORRELATED correlation coefficients as in Meng et al. (1992).
#Meng, X. L., Rosenthal, R., & Rubin, D. B. (1992).
#Comparing correlated correlation coefficients. Psychological bulletin, 111(1), 172.
#http://psycnet.apa.org/journals/bul/111/1/172/
#ABSTRACT
#Provides simple but accurate methods for comparing correlation coefficients between a dependent variable and
#a set of independent variables. The methods are simple extensions of O. J. Dunn and V. A. Clark's (1969) work
#using the Fisher z transformation and include a test and confidence interval for comparing 2 correlated
#correlations, a test for heterogeneity, and a test and confidence interval for a contrast among
#k (>2) correlated correlations. Also briefly discussed is why the traditional Hotelling's t test
#for comparing correlations is generally not appropriate in practice.
#The following code is based on equations 2, 3, 5, and 6. It allows the user to perform a heterogeneity test
#of k correlated coefficients and a contrast among them. Useful when theory dictates that the correlations
#should be non-uniform, or that some should be less/more strongly correlated with
#the dependent variable than others.
#The contrast test involves two steps. First, we have to find h; this is done using the test of heterogeneity.
#Second is the contrast test itself. The following code consists of both steps.
#STEP 1 -- STEP 1 -- STEP 1 -- STEP 1 -- STEP 1 -- STEP 1 --
#The method of finding h is as follows. The starting coefficients, n, and median intercorrelation
#are those from the computational example given in the paper.
#Replace with your own data, changing the coefficients, n, median intercorrelation, and contrast weights as appropriate.
#This part creates a table containing the data we will need for later computations
library(psych) #for fisherz(), fisherz2r() and r.con()
cors <- c(.63, .53, .54, -.03) # put correlation coefficients (Xs -> Y) in here
zs <- fisherz(cors) #fisherz transforms
rs <- fisherz2r(zs) #transformations back
round(zs,2)
mz <- mean(zs) #mean fisher z
zmz <- zs-mz #fisher z minus mean fisher z
zmz2 <- zmz*zmz #fisher z minus mean fisher z squared
cw <- c(1, 1, 1, -3) #put contrast weights in here, presently the first three Xs contrasted against the fourth
cwzs <- cw*zs #contrast weights multiplied by fisher z
cw2 <-cw^2 #contrast weight square
n <- 15 #n - put n here
r <- cors #same coefficients as above
rc <- matrix(r.con(r,n),ncol=2) #confidence intervals of r
t <- r*sqrt(n-2)/sqrt(1-r^2) #t
p <- 2*pt(-abs(t),n-2) #two-tailed p
square <-r*r #r squared
r.rc <- data.frame(r=r,z=fisherz(r),lower=rc[,1],upper=rc[,2],t=t,p=p,rsquare=square,zmz=zmz, zmz2 = zmz2, cw=cw, cwzs=cwzs, cw2=cw2)
round(r.rc,3) #creates data frame r.rc
#average r square
avrsqur <- mean(square)
#sum of fishers z- mean fishers z
sumzmz <- sum(zmz2)
#enter median intercorrelation of Xs
mi <- (.37) # INPUT MEDIAN CORRELATION HERE
#find f
f <- (1-mi)/(2*(1-avrsqur))
f
#0.4158828, .4159 to 2dp as given in example
#find h
h <- (1-(f*avrsqur))/(1-avrsqur)
h
#1.187071, 1.1871 to 2dp as given in example
#find chi-square (k-1) # THIS CHI-SQUARE IS A TEST OF THE HETEROGENEITY OF THE CORRELATED COEFFICIENTS
chisqu1 <- (n-3)*sumzmz
chisqu2 <- (1-mi)*h
chisqu <- chisqu1/chisqu2
chisqu
# 5.711426
#FIND P for the chi-square statistic; df = k - 1 = 3 here
pchisq(chisqu, 3, lower.tail=FALSE)
#STEP 2 -- STEP 2 -- STEP 2 -- STEP 2 -- STEP 2 -- STEP 2
#we now have chi-square and h, success! We need to take h and do some other stuff with it for our contrast test.
scwzs <- sum(cwzs) #sum of contrast weights times fisher zs (numerator of the test) #2.025 in example
denom1 <- sum(cw2) #sum of contrast weights squared
denom2 <- 1 - mi #1 - median intercorrelation of X's
ct1 <- (n-3)/ (denom1*denom2*h)
ct2 <- sqrt(ct1)
contrastz <- scwzs*ct2
#PRINT Z FOR CONTRAST, THIS IS YOUR Z TEST FOR CONTRAST AMONG CORRELATED CORRELATION COEFFICIENTS.
contrastz
# in Meng et al.'s example = 2.34 (2dp)
#FIND one-tailed p for z
pnorm(-abs(contrastz))
#Created by Adam Pegler, 10.02.2017, email adamjpegler@gmail.com for questions, comments or complaints
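Incidentally, if you ever find yourself without the psych package, note that Fisher's z transform is just the inverse hyperbolic tangent, so base R's atanh() and tanh() are drop-in replacements for fisherz() and fisherz2r(). A quick check with the coefficients from the example:

```r
z <- atanh(c(.63, .53, .54, -.03))  # same values as psych::fisherz()
round(z, 2)                         # 0.74 0.59 0.60 -0.03
tanh(z)                             # same as psych::fisherz2r(): recovers the rs
```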
