
I have a question about finding, for each unique ID in column A, the rows with the two largest values in column C (weight), then calculating the mean of column B (layer) over those rows. A sample of my data is here:

ID  layer   weight
1   0.6843629   0.35
1   0.6360772   0.70
1   0.6392318   0.14
2   0.3848640   0.05
2   0.3882660   0.30
2   0.3877026   0.10
2   0.3964194   0.60
2   0.4273218   0.02
2   0.3869507   0.12
3   0.4748541   0.07
3   0.5853659   0.42
3   0.5383678   0.10
3   0.6060287   0.60
4   0.4859274   0.08
4   0.4720740   0.48
4   0.5126481   0.08
4   0.5280899   0.48
5   0.7492097   0.07
5   0.7220433   0.35
5   0.8750000   0.10
5   0.8302752   0.50
6   0.4306283   0.10
6   0.4890895   0.25
6   0.3790714   0.20
6   0.5139686   0.50
6   0.3885678   0.02
6   0.4706815   0.05
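
For reference, here is a sketch that rebuilds this sample as an R data frame (the name dat is an assumption matching the answers below; my own code calls the data index1, and one answer uses df):

dat <- data.frame(
  ID     = rep(1:6, times = c(3, 6, 4, 4, 4, 6)),
  layer  = c(0.6843629, 0.6360772, 0.6392318, 0.3848640, 0.3882660,
             0.3877026, 0.3964194, 0.4273218, 0.3869507, 0.4748541,
             0.5853659, 0.5383678, 0.6060287, 0.4859274, 0.4720740,
             0.5126481, 0.5280899, 0.7492097, 0.7220433, 0.8750000,
             0.8302752, 0.4306283, 0.4890895, 0.3790714, 0.5139686,
             0.3885678, 0.4706815),
  weight = c(0.35, 0.70, 0.14, 0.05, 0.30, 0.10, 0.60, 0.02, 0.12,
             0.07, 0.42, 0.10, 0.60, 0.08, 0.48, 0.08, 0.48, 0.07,
             0.35, 0.10, 0.50, 0.10, 0.25, 0.20, 0.50, 0.02, 0.05)
)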

For each ID, I want to calculate the mean value of layer, using only the rows with the two highest weights.

I can do this with the following code in R:

library(plyr)
library(data.table)

# index1 is the data frame shown above
# 1. Take the row with the highest weight per ID
ind.max1 <- ddply(index1, "ID", function(x) x[which.max(x$weight), ])
# 2. Drop those rows from the original data via a data.table anti-join on layer
dt1 <- data.table(index1, key = "layer")
dt2 <- data.table(ind.max1, key = "layer")
index2 <- dt1[!dt2]
# 3. Take the row with the remaining highest weight per ID (the second-highest overall)
ind.max2 <- ddply(index2, "ID", function(x) x[which.max(x$weight), ])
# 4. Combine both sets of rows and average layer within each ID
ind.max.all <- merge(ind.max1, ind.max2, all = TRUE)
ind.ndvi.mean <- as.data.frame(tapply(ind.max.all$layer, list(ind.max.all$ID), mean))

This uses ddply to select the row with the highest weight per ID and put it into a data frame together with layer. I then remove those rows from the original data frame using a data.table anti-join, repeat the ddply max selection to pick up the second-highest rows, merge the two data frames of max-weight rows into one, and finally compute the mean with tapply. There must be a more efficient way to do this. Does anyone have any insight? Cheers.

Are the two largest values guaranteed to be unique? – Glen_b

3 Answers

3 votes

You could use data.table:

 library(data.table)
 setDT(dat)[, .(Meanlayer = mean(layer[order(-weight)[1:2]])), by = ID]
 #   ID Meanlayer
 #1:  1 0.6602200
 #2:  2 0.3923427
 #3:  3 0.5956973
 #4:  4 0.5000819
 #5:  5 0.7761593
 #6:  6 0.5015291
  • order(-weight) sorts the weight column in descending order and returns the row positions
  • [1:2] selects the first two positions, i.e. the rows with the two highest weights, within each group of ID
  • layer[order(-weight)[1:2]] subsets the corresponding layer values using those positions
  • mean() averages the two selected layer values (see the worked example for ID 1 below)
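
To make the indexing concrete, here is the computation spelled out for ID 1, using the values from the sample data above:

w <- c(0.35, 0.70, 0.14)                 # weights for ID 1
order(-w)                                # 2 1 3: row positions, heaviest first
order(-w)[1:2]                           # 2 1: the two highest weights
l <- c(0.6843629, 0.6360772, 0.6392318)  # layer values for ID 1
mean(l[order(-w)[1:2]])                  # (0.6360772 + 0.6843629) / 2 = 0.66022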

Alternatively, version 1.9.3 (the current development version at the time of writing) exports a function setorder for reordering data.tables in any order, by reference:

require(data.table) ## 1.9.3+
setorder(setDT(dat), ID, -weight) ## dat is now reordered as we require
dat[, mean(layer[1:min(.N, 2L)]), by = ID] ## min(.N, 2L) guards groups with fewer than two rows

By ordering first, we avoid calling order() within each group (each unique value of ID), which becomes more advantageous as the number of groups grows. setorder() is also much more efficient than order(), as it does not need to create a copy of your data.
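
If you want to compare the two approaches on your own data, a rough timing sketch (the microbenchmark package is an assumption; it presumes dat has already been converted with setDT() as above):

library(microbenchmark)
setorder(dat, ID, -weight)  # presort once, by reference
microbenchmark(
  order_per_group = dat[, mean(layer[order(-weight)[1:2]]), by = ID],
  presorted       = dat[, mean(layer[1:min(.N, 2L)]), by = ID]
)

With only six small groups as in the sample the difference will be negligible; the gap should widen as the number of groups grows.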

1 vote

This actually is a question for StackOverflow... anyway! I don't know if the version below is efficient enough for you (it assumes your data frame is called df)...

# Per ID, the row positions of the weights, ordered largest first
s.ind <- tapply(df$weight, df$ID, function(x) order(x, decreasing = TRUE))
# Per ID, the layer values
val <- tapply(df$layer, df$ID, function(x) x)

# Take the two layer values carrying the highest weights, then average them
foo <- function(x, y) list(x[y][1:2])
lapply(mapply(foo, val, s.ind), mean)
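
The last line returns a list of means; if you prefer a named vector (a small addition, not part of the original code), wrap the call in unlist():

unlist(lapply(mapply(foo, val, s.ind), mean))
#         1         2         3         4         5         6 
# 0.6602200 0.3923427 0.5956973 0.5000819 0.7761593 0.5015291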
0 votes

I think this will do it. Assuming the data is called dat,

sapply(split(dat, dat$ID), function(x) {
    with(x, {
        mean(layer[weight %in% rev(sort(weight))[1:2]])
    })
})
#         1         2         3         4         5         6 
# 0.6602200 0.3923427 0.5956973 0.5000819 0.7761593 0.5015291 

You'll likely want to include na.rm = TRUE as the second argument to mean to account for any rows that contain NA values.
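
For example, the same call with that argument added:

sapply(split(dat, dat$ID), function(x) {
    with(x, {
        mean(layer[weight %in% rev(sort(weight))[1:2]], na.rm = TRUE)
    })
})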

Alternatively, mapply is probably faster, using the exact same code with the arguments in a different order:

mapply(function(x) {
    with(x, {
        mean(layer[weight %in% rev(sort(weight))[1:2]])
    })
}, split(dat, dat$ID))