15 votes

I have a big dataset with 100 variables and 3,000 observations. I want to detect which variables (columns) are highly correlated or redundant, so that I can reduce the dimensionality of the dataframe. I tried the loop below, but it only calculates the correlation between one column and the others, and I always get a warning message:

for(i in 1:ncol(predicteurs)){
correlations <- cor(predicteurs[,i],predicteurs[,2])
names(correlations[which.max(abs(correlations))])
}

Warning messages:
1: In cor(predicteurs[, i], predicteurs[, 2]) :
  the standard deviation is zero
2: In cor(predicteurs[, i], predicteurs[, 2]) :
  the standard deviation is zero

Can anyone help me?


4 Answers

31 votes

Updated for newer tidyverse packages.

I would try gathering a correlation matrix.

# install.packages(c('tibble', 'dplyr', 'tidyr', 'purrr'))
library(tibble)
library(dplyr)
library(tidyr)
library(purrr) # for map_chr() used in the de-duplication step below

d <- data.frame(x1=rnorm(10),
                x2=rnorm(10),
                x3=rnorm(10))

d2 <- d %>% 
  as.matrix %>%
  cor %>%
  as.data.frame %>%
  rownames_to_column(var = 'var1') %>%
  gather(var2, value, -var1)

  var1 var2       value
1   x1   x1  1.00000000
2   x1   x2 -0.05936703
3   x1   x3 -0.37479619
4   x2   x1 -0.05936703
5   x2   x2  1.00000000
6   x2   x3  0.43716004
7   x3   x1 -0.37479619
8   x3   x2  0.43716004
9   x3   x3  1.00000000

# .5 is an arbitrary number
filter(d2, value > .5)
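
Note that the filter above keeps the self-correlations (value of 1) and misses strong negative correlations. A slightly stricter version could look like this:

# drop self-pairs and catch strong correlations of either sign
filter(d2, var1 != var2, abs(value) > .5)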

# remove duplicates
d2 %>%
  mutate(var_order = paste(var1, var2) %>%
           strsplit(split = ' ') %>%
           map_chr( ~ sort(.x) %>% 
                      paste(collapse = ' '))) %>%
  mutate(cnt = 1) %>%
  group_by(var_order) %>%
  mutate(cumsum = cumsum(cnt)) %>%
  filter(cumsum != 2) %>%
  ungroup %>%
  select(-var_order, -cnt, -cumsum)

  var1  var2   value
1 x1    x1     1     
2 x1    x2    -0.0594
3 x1    x3    -0.375 
4 x2    x2     1     
5 x2    x3     0.437 
6 x3    x3     1     
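
If the goal is to actually drop the redundant columns rather than just list the pairs, one option beyond this answer is caret::findCorrelation(), which suggests columns to remove from a correlation matrix. A minimal sketch, assuming the caret package is installed (d_reduced is a name introduced here):

library(caret)
cor_mat <- cor(d)                                  # correlation matrix of the toy data above
drop_idx <- findCorrelation(cor_mat, cutoff = .5)  # indices of columns suggested for removal
d_reduced <- if (length(drop_idx) > 0) d[, -drop_idx, drop = FALSE] else d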
10 votes

Another approach, using only base R, could be:

set.seed(101)
mat = matrix(runif(12), 3)  # 3 x 4 matrix of random data
cor_mat = cor(mat)          # 4 x 4 correlation matrix of its columns
cor_mat
#           [,1]       [,2]       [,3]       [,4]
#[1,]  1.0000000  0.1050075  0.9159599 -0.5108936
#[2,]  0.1050075  1.0000000  0.4952340 -0.9085390
#[3,]  0.9159599  0.4952340  1.0000000 -0.8129071
#[4,] -0.5108936 -0.9085390 -0.8129071  1.0000000
# lower.tri() keeps each pair once and skips the diagonal. As written this flags
# positive correlations only; wrap cor_mat in abs() to catch negative ones too.
which(cor_mat > 0.15 & lower.tri(cor_mat), arr.ind = TRUE, useNames = FALSE)
#     [,1] [,2]
#[1,]    3    1
#[2,]    3    2
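
If the matrix has column names, the same indices can be mapped back to variable names. A small extension of the snippet above (the x1..x4 names are added here for illustration):

colnames(cor_mat) <- rownames(cor_mat) <- paste0("x", 1:4)
idx <- which(cor_mat > 0.15 & lower.tri(cor_mat), arr.ind = TRUE)
data.frame(var1 = rownames(cor_mat)[idx[, 1]],
           var2 = colnames(cor_mat)[idx[, 2]],
           value = cor_mat[idx])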
8 votes

I had the very same issue and here's how I solved it:

install.packages("Hmisc") # Only run on first use
library(Hmisc)
rawdata <- read.csv("/path/to/your/datafile", sep="\t", stringsAsFactors=FALSE) # In my case the separator in the file was "\t", adjust accordingly.
ccs <- as.matrix(rawdata)
rcorr(ccs, type="pearson") # You can also use "spearman"

This has an advantage over the other methods: it outputs both the correlation values and the corresponding p-values.
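
Since rcorr() returns a list, the two matrices can also be stored and inspected separately; a minimal sketch (res is a name introduced here):

res <- rcorr(ccs, type = "pearson")
res$r  # matrix of correlation coefficients
res$P  # matrix of p-values (NA on the diagonal)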

1 vote

You can use the corrr package. For example:

corrr::correlate(your_data, method = "pearson")
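
correlate() returns a tidy data frame, so high-correlation pairs can be extracted with corrr's own helpers. A sketch, assuming the corrr and dplyr packages and an arbitrary 0.5 cutoff:

library(corrr)
library(dplyr)
correlate(your_data, method = "pearson") %>%
  shave() %>%                # blank out the upper triangle and diagonal
  stretch(na.rm = TRUE) %>%  # long format: one row per variable pair
  filter(abs(r) > .5)        # keep only strongly correlated pairs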