I am running a series of large simulations over a grid, processing the grid row by row, and I have found that my sampling functions are a bottleneck. I've attempted to use the foreach and doMC libraries to speed up the process, but either the parallel method is slower than the serial one, or I've been unable to write a function that foreach interprets correctly.
Looking at some other posts, it appears that my approach using foreach may be misguided, in that the number of jobs I'm dispatching greatly exceeds the number of available processors. I'm wondering if folks have suggestions for how best to implement parallelization in my situation. My simulations generally come in two types. In the first, I calculate a matrix whose columns each hold a sampling interval (two rows: lower and upper bound) for one element of the grid-row I'm processing, and then sample with runif (in the real simulations my rows contain ~9000 cells and I perform 10000 simulations per cell).
# Number of simulations per element.
n <- 5
# Generate an example sampling matrix: each column holds the (min, max)
# interval for one grid-row element.
m.int1 <- matrix(seq(1, 20, 1), ncol = 10, nrow = 2)
# Sample n values uniformly over the interval held in a two-element vector.
f.rand1 <- function(a) {
  runif(n, a[1], a[2])
}
# Run the simulation: columns correspond to grid-row elements,
# rows to the individual simulations.
sim1 <- round(apply(m.int1, 2, f.rand1))
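One alternative I've sketched (untested at the full 9000 x 10000 scale) avoids the per-column apply() call entirely: since runif() is vectorized over its min and max arguments, a single call can draw all n * ncol samples at once if the bounds are repeated n times per column.

```r
# Vectorized sketch: one runif() call for all samples, recycling the
# per-column bounds so that apply() is not needed.
n <- 5
m.int1 <- matrix(seq(1, 20, 1), ncol = 10, nrow = 2)
sim1.vec <- round(matrix(
  runif(n * ncol(m.int1),
        min = rep(m.int1[1, ], each = n),   # lower bound, n copies per column
        max = rep(m.int1[2, ], each = n)),  # upper bound, n copies per column
  nrow = n))
```

Filling the matrix by column reproduces the same layout as the apply() version (columns are grid-row elements, rows are simulations).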
In the second case, I am attempting to sample from a set of empirical distributions that are stored as the columns of a matrix. The value of each grid-row element gives the column to be sampled.
# Number of simulations per element.
n <- 5
# Generate a vector representing a row of grid values.
v.int2 <- round(runif(10, 1, 3))
# Define a matrix whose columns hold the distributions to be sampled.
m.samples <- cbind(rep(5, 10), rep(4, 10), rep(3, 10))
# Sample n values from the column of m.samples indexed by a.
f.sample <- function(a) {
  sample(m.samples[, a], n)
}
# Sample m.samples indexed by column number.
sim2 <- sapply(v.int2, f.sample)
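A rough alternative I've considered (a sketch, and note it assumes sampling with replacement, unlike the sapply version above): group the grid elements by the distribution column they index, then make one large sample() call per distinct column instead of one call per element.

```r
# Sketch: one sample() call per distinct distribution column, scattered
# back into the output matrix. replace = TRUE is assumed so a single draw
# can cover every element in the group.
set.seed(1)
n <- 5
v.int2 <- round(runif(10, 1, 3))
m.samples <- cbind(rep(5, 10), rep(4, 10), rep(3, 10))
sim2.vec <- matrix(NA_real_, nrow = n, ncol = length(v.int2))
for (col in unique(v.int2)) {
  idx <- which(v.int2 == col)
  sim2.vec[, idx] <- sample(m.samples[, col], n * length(idx), replace = TRUE)
}
```

With only a handful of distinct columns, this reduces thousands of small sample() calls to a few large ones.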
In the second example, I was able to use foreach() with %dopar% to run in parallel, but the simulation took substantially longer than the serial code. For the first example, I could not write a function that took advantage of foreach parallelization at all. I'll include the code I used in the second case just to demonstrate my thinking, though I now realize that dispatching one task per element carries too much overhead.
library(foreach)
library(doMC)
registerDoMC(2)

n <- 5
# Sample m.samples indexed by column number, one foreach task per element.
sim2.par <- foreach(i = 1:length(v.int2), .combine = "cbind") %dopar%
  sample(m.samples[, v.int2[i]], n)
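One direction I've been experimenting with to cut that overhead (a sketch using the base parallel package, which I believe forks workers much like doMC does): split the columns into one chunk per core, so each worker processes a large block serially and the per-task dispatch cost is paid only a handful of times instead of once per element.

```r
# Chunked parallelism sketch: one mclapply task per core, each covering a
# block of columns, instead of one foreach task per element.
library(parallel)

n <- 5
v.int2 <- round(runif(10, 1, 3))
m.samples <- cbind(rep(5, 10), rep(4, 10), rep(3, 10))

n.cores <- 2
# Partition the column indices into n.cores contiguous blocks.
chunks <- split(seq_along(v.int2),
                cut(seq_along(v.int2), n.cores, labels = FALSE))
# Each worker runs the serial sapply() over its block of columns.
res <- mclapply(chunks, function(idx) {
  sapply(v.int2[idx], function(a) sample(m.samples[, a], n))
}, mc.cores = n.cores)
# Reassemble the blocks in their original column order.
sim2.chunked <- do.call(cbind, res)
```

Note that forked workers share the parent's RNG state unless mc.set.seed is used, so reproducibility across runs needs care (e.g. RNGkind("L'Ecuyer-CMRG")).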
I'd appreciate some suggestions on an approach (and some code!) that would help me use parallelization efficiently. Again, the rows I'm processing generally contain about 9000 elements, and we're conducting 10000 simulations per element, so my output simulation matrices are generally on the order of 10000 x 9000. Thanks for your help.