2 votes

I am working with a data table that has groups of data, and for each group a position (from -1000 to +1000) and a count at each position. A small example looks like this:

library(data.table)
dt.ex <- data.table(newID = rep(c("A","B"), each = 6), pos = rep(-2:3, 2), count = sample(1:100, 12))
    newID pos count
 1:     A  -2    29
 2:     A  -1    32
 3:     A   0    33
 4:     A   1    45
 5:     A   2    51
 6:     A   3    26
 7:     B  -2    22
 8:     B  -1    79
 9:     B   0     2
10:     B   1    48
11:     B   2    87
12:     B   3    38

What I want to do is calculate the mean (or sum) over every n rows within each group of newID. That is, split each group into chunks of n consecutive rows and aggregate within each chunk. This would be the output assuming n = 3 and summing the counts:

newID pos count
    A  -2    94
    A   1   122
    B  -2   103 
    B   1   173

And I honestly have no idea how to start without resorting to some kind of looping, which is not advisable for a 67094000 x 3 table. If I wanted to aggregate per newID only, something like this would do the trick, but I have yet to see a solution that comes close to answering my question. Plyr solutions are also welcome, although I suspect they might be too slow for this.

2
In your output, you do not have the mean of pos, but instead the lowest/first value...? – Frank
How many unique newIDs do you have in your data? – Arun

2 Answers

2 votes

An alternate way (without using .SD) would be:

dt.ex[, seq := (seq_len(.N)-1) %/% 3, by=newID][, 
      list(pos = mean(pos), count=sum(count)), list(newID, seq)]
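Applied to the question's small example, the full round trip looks like the sketch below (self-contained; the set.seed(1) call is an assumption added here for reproducibility, so the counts will differ from the question's, and the helper column is dropped afterwards to leave the original table untouched):

```r
library(data.table)

set.seed(1)  # assumed seed, not from the question
dt.ex <- data.table(newID = rep(c("A", "B"), each = 6),
                    pos   = rep(-2:3, 2),
                    count = sample(1:100, 12))

n <- 3
# label consecutive blocks of n rows within each newID group
dt.ex[, seq := (seq_len(.N) - 1L) %/% n, by = newID]
# aggregate within each (newID, block): mean position, summed count
res <- dt.ex[, list(pos = mean(pos), count = sum(count)),
             by = list(newID, seq)]
dt.ex[, seq := NULL]  # remove the helper column again
res
```

Each newID group of 6 rows collapses to 2 rows (one per block of 3), so `res` has 4 rows, with `pos` holding the mean position of each block.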

Benchmarking on (relatively) bigger data:

set.seed(45)
get_grps <- function() paste(sample(letters, 5, TRUE), collapse="")
grps <- unique(replicate(1e4, get_grps()))

dt.in <- data.table(newID = sample(grps, 6e6, TRUE), 
                    pos = sample(-1000:1000, 6e6, TRUE), 
                    count = runif(6e6))
setkey(dt.in, newID)

require(microbenchmark)
eddi <- function(dt) {
   dt[, .SD[, list(pos = mean(pos), count = sum(count)), 
          by = seq(0, .N-1) %/% 3], by = newID]
}

arun <- function(dt) {
    dt[, seq := (seq_len(.N)-1) %/% 3, by=newID][, 
      list(pos = mean(pos), count=sum(count)), list(newID, seq)]
}

microbenchmark(o1 <- eddi(copy(dt.in)), o2 <- arun(copy(dt.in)), times=2)


Unit: seconds
                    expr      min       lq   median       uq      max neval
 o1 <- eddi(copy(dt.in)) 25.23282 25.23282 26.16009 27.08736 27.08736     2
 o2 <- arun(copy(dt.in)) 13.59597 13.59597 14.41190 15.22783 15.22783     2
1 vote

Try this:

dt.ex[, .SD[, list(pos = mean(pos), count = sum(count)),
              by = seq(0, .N-1) %/% 3],
        by = newID]

Note that the outer group's .N is used in the nested by: .N is only defined inside a j-expression, so the inner by picks it up from the enclosing call, where it equals the number of rows in the current newID group.
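For reference, here is the expression run end-to-end on the question's sample data (a sketch with an assumed set.seed(1) for reproducibility, so the counts differ from the question's); the inner by-column appears in the result under the name `seq`:

```r
library(data.table)

set.seed(1)  # assumed seed, not from the question
dt.ex <- data.table(newID = rep(c("A", "B"), each = 6),
                    pos   = rep(-2:3, 2),
                    count = sample(1:100, 12))

# seq(0, .N - 1) %/% 3 labels consecutive blocks of 3 rows per newID,
# and the aggregation runs within each block of each group
res <- dt.ex[, .SD[, list(pos = mean(pos), count = sum(count)),
                   by = seq(0, .N - 1) %/% 3],
             by = newID]
res
```

The result has one row per 3-row chunk: 4 rows here, with `pos` averaged and `count` summed within each chunk.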