I have a dataframe of BLAST HSPs (not all columns displayed):
query.id subject.id alignment.length
196 1032519524 1032519523 212
197 1032519524 1032519523 182
198 1032519524 1032519522 212
199 1032519524 1032519522 182
200 1032519524 1032518642 212
and would like to collapse the data so that I end up with unique pairs of query.id and subject.id. If there are multiple rows with the same query.id and subject.id, their alignment.length values should be summed:
query.id subject.id alignment.length
1 1032519524 1032518642 212
2 1032519524 1032519522 394
3 1032519524 1032519523 394
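For reproducibility, the input rows shown above can be recreated with something like this (the real data has more columns; the values below are just the ones displayed):

library(plyr)

blast.results <- data.frame(
  query.id         = c(1032519524, 1032519524, 1032519524, 1032519524, 1032519524),
  subject.id       = c(1032519523, 1032519523, 1032519522, 1032519522, 1032518642),
  alignment.length = c(212, 182, 212, 182, 212),
  row.names        = 196:200
)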
I do this with a neat one-liner using plyr:
ddply(blast.results, c("query.id", "subject.id"), function(x) colSums(x['alignment.length']))
Unfortunately, this becomes prohibitively slow when processing a few hundred thousand BLAST results. Is there a faster, more scalable approach?
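For reference, the first data.table solution benchmarked below corresponds to something like this (a sketch; it assumes data.table is installed and that blast.results is the data frame above):

library(data.table)

# convert blast.results to a data.table by reference, then sum
# alignment.length within each unique (query.id, subject.id) pair
setDT(blast.results)[, .(alignment.length = sum(alignment.length)),
                     by = .(query.id, subject.id)]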
Microbenchmark for @PoGibas' data.table solutions:
Unit: milliseconds

expr: setDT(blast.results)[, .(alignment.length = sum(alignment.length)), .(query.id, subject.id)]
       min        lq        mean     median        uq        max neval cld
 11.514016 18.010048 31.61341262 22.0045935 32.104018 222.943783   100   b

expr: setkey(setDT(blast.results), query.id, subject.id)[, .(alignment.length = sum(alignment.length)), .(query.id, subject.id)]
       min        lq        mean     median        uq        max neval cld
 15.308905 22.620595 36.32531007 28.2132725 43.527390 156.109477   100   b

expr: 100
       min        lq        mean     median        uq        max neval cld
  0.000012  0.000185  0.00033057  0.0003635  0.000443   0.000772   100   a
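The benchmark itself can be reproduced with a call along these lines (a sketch, not the exact call used; the near-zero third row above is consistent with a bare 100 having been passed as an expression to time rather than as times = 100):

library(microbenchmark)
library(data.table)

# time the two data.table variants on the same input data
microbenchmark(
  setDT(blast.results)[, .(alignment.length = sum(alignment.length)), .(query.id, subject.id)],
  setkey(setDT(blast.results), query.id, subject.id)[, .(alignment.length = sum(alignment.length)), .(query.id, subject.id)],
  times = 100
)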