stack generally outperforms melt.

A straightforward approach to this problem with stack would be:
dt[, stack(.SD), by = "uid"]
Of course, you can specify your .SDcols if necessary, and then use setnames() to change the names to whatever you want.
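For instance, here is a minimal sketch of that idea, assuming the dt built in the benchmark below and stacking only the numeric "varA" columns (the grep() pattern and the final column names are just illustrative):

cols <- grep("^varA_", names(dt))
long <- dt[, stack(.SD), by = "uid", .SDcols = cols]
setnames(long, c("values", "ind"), c("value", "variable"))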
(Self-promotion alert)

I wrote some functions and put them in a package called "splitstackshape". One of the functions is called Stacked(), and as of version 1.2.0 of the "splitstackshape" package, it should work very fast.
It's a little bit different from just stacking all the remaining columns in a data.table. It is more analogous to base R's reshape() than to melt() from "reshape2". Here's an example of Stacked() in action.
I've created a decently large data.table to do this test. There are 50 numeric columns we want to stack and 50 factor columns we want to stack. I've also further optimized @Andreas's answer.
The data
library(data.table)
library(splitstackshape)

set.seed(1)
m1 <- matrix(rnorm(10000*50), ncol = 50)
m2 <- matrix(sample(LETTERS, 10000*50, replace = TRUE), ncol = 50)
colnames(m1) <- paste("varA", sprintf("%02d", 1:50), sep = "_")
colnames(m2) <- paste("varB", sprintf("%02d", 1:50), sep = "_")
dt <- data.table(uid = 1:10000, m1, m2)
The functions for benchmarking
## Stacked
test1 <- function() Stacked(dt, "uid", c("varA", "varB"), "_")

## merged.stack
test2 <- function() merged.stack(dt, "uid", c("varA", "varB"), "_")
## unlist(..., use.names = TRUE) -- OPTIMIZED
test3 <- function() {
  list(cbind(dt[, "uid", with = FALSE],
             dt[, list(variable = rep(names(.SD), each = nrow(dt)),
                       value = unlist(.SD)),
                .SDcols = 2:51]),
       cbind(dt[, "uid", with = FALSE],
             dt[, list(variable = rep(names(.SD), each = nrow(dt)),
                       value = unlist(.SD)),
                .SDcols = 52:101]))
}
## unlist(..., use.names = FALSE) -- OPTIMIZED
test4 <- function() {
  list(cbind(dt[, "uid", with = FALSE],
             dt[, list(variable = rep(names(.SD), each = nrow(dt)),
                       value = unlist(.SD, use.names = FALSE)),
                .SDcols = 2:51]),
       cbind(dt[, "uid", with = FALSE],
             dt[, list(variable = rep(names(.SD), each = nrow(dt)),
                       value = unlist(.SD, use.names = FALSE)),
                .SDcols = 52:101]))
}
## Andreas's current answer
test5 <- function() {
  list(dt[, list(variable = names(.SD),
                 value = unlist(.SD, use.names = FALSE)),
          by = uid, .SDcols = 2:51],
       dt[, list(variable = names(.SD),
                 value = unlist(.SD, use.names = FALSE)),
          by = uid, .SDcols = 52:101])
}
The results
library(microbenchmark)
microbenchmark(Stacked = test1(), merged.stack = test2(),
               unlist.namesT = test3(), unlist.namesF = test4(),
               AndreasAns = test5(), times = 3)
# Unit: milliseconds
#           expr        min         lq     median         uq        max neval
#        Stacked   391.3251   393.0976   394.8702   421.4185   447.9668     3
#   merged.stack   764.3071   769.6935   775.0799   867.2638   959.4477     3
#  unlist.namesT  1680.0610  1761.9701  1843.8791  1881.9722  1920.0653     3
#  unlist.namesF   215.0827   242.7748   270.4669   270.6944   270.9218     3
#     AndreasAns 16193.5084 16249.5797 16305.6510 16793.3832 17281.1154     3
^^ I'm not sure why Andreas's current answer is so slow here. The "optimization" I did was basically to unlist without using by, which made a huge difference on the "varB" (factor) columns.
The manual approach is still faster than the functions from "splitstackshape", but these are milliseconds we're talking about, and some pretty compact one-liner code!
Sample output
For reference, here is what the output of Stacked() looks like. It's a list of "stacked" data.tables, one list item for each stacked variable.
test1()
# $varA
# uid .time_1 varA
# 1: 1 01 -0.6264538
# 2: 1 02 -0.8043316
# 3: 1 03 0.2353485
# 4: 1 04 0.6179223
# 5: 1 05 -0.2212571
# ---
# 499996: 10000 46 -0.6859073
# 499997: 10000 47 -0.9763478
# 499998: 10000 48 0.6579464
# 499999: 10000 49 0.7741840
# 500000: 10000 50 0.5195232
#
# $varB
# uid .time_1 varB
# 1: 1 01 D
# 2: 1 02 A
# 3: 1 03 S
# 4: 1 04 L
# 5: 1 05 T
# ---
# 499996: 10000 46 A
# 499997: 10000 47 W
# 499998: 10000 48 H
# 499999: 10000 49 U
# 500000: 10000 50 W
And here is what the merged.stack output looks like. It's similar to what you would get when you use reshape(..., direction = "long") from base R.
test2()
# uid .time_1 varA varB
# 1: 1 01 -0.6264538 D
# 2: 1 02 -0.8043316 A
# 3: 1 03 0.2353485 S
# 4: 1 04 0.6179223 L
# 5: 1 05 -0.2212571 T
# ---
# 499996: 10000 46 -0.6859073 A
# 499997: 10000 47 -0.9763478 W
# 499998: 10000 48 0.6579464 H
# 499999: 10000 49 0.7741840 U
# 500000: 10000 50 0.5195232 W
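To illustrate that resemblance, here is a rough base R sketch of the analogous reshape() call (not benchmarked; the "^varA_"/"^varB_" patterns and the timevar name are just one reasonable setup, and the time column will be 1:50 rather than the zero-padded suffixes):

long <- reshape(as.data.frame(dt), direction = "long",
                idvar = "uid",
                varying = list(grep("^varA_", names(dt)),
                               grep("^varB_", names(dt))),
                v.names = c("varA", "varB"),
                timevar = ".time_1")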