I am writing parallelized Julia code for Monte Carlo simulations. This requires me to generate random numbers in parallel on different cores. In a simple test on my workstation, I tried to generate random numbers on 4 cores and got the following results:
julia -p 4
julia> @everywhere using Random
julia> @everywhere x = randn(1)
julia> remotecall_fetch(println,1,x[1])
-1.9348951407543997
julia> remotecall_fetch(println,2,x[1])
From worker 2: -1.9348951407543997
julia> remotecall_fetch(println,3,x[1])
From worker 3: -1.9348951407543997
julia> remotecall_fetch(println,4,x[1])
From worker 4: -1.9348951407543997
I do not understand why the numbers fetched from the different processes are exactly the same. I am not sure what the mistake is. My understanding is that the @everywhere macro lets you run the same piece of code on all the processes in parallel. I am currently running Julia 1.6.0 on my computer. Thank you.
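For reference, this is roughly the check I was hoping to do, i.e. reading each worker's own copy of x directly on that worker (a minimal sketch only; I am guessing that evaluating an expression remotely with Core.eval is one way to look at a worker-local variable, but I do not know whether it is the idiomatic one):

using Distributed
addprocs(4)                       # same effect as starting with julia -p 4

@everywhere using Random
@everywhere x = randn(1)          # each worker should draw its own number

for p in workers()
    # evaluate x[1] inside worker p's own Main module,
    # so the printed value is taken from that worker's copy of x
    val = remotecall_fetch(Core.eval, p, Main, :(x[1]))
    println("worker ", p, ": ", val)
end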
UPDATE: Thank you for the responses. Basically, what I am looking for is an assignment statement like x = y, where x and y are both local to a worker process. I tried something like this:
julia -p 4
@sync @distributed for i = 1:2
    x = randn(1)
    println(x)
end
From worker 3: [0.4451131733445428]
From worker 2: [-0.4875627629008678]
Task (done) @0x00007f1d92037340
julia> remotecall_fetch(println,2,x)
ERROR: UndefVarError: x not defined
Stacktrace:
[1] top-level scope
@ REPL[23]:1
This seems to generate the random numbers independently on each process. However, I do not know how to access the variable x anymore. I tried remotecall_fetch(println, 2, x), but the variable x does not seem to be defined on the worker processes. This has been super confusing.
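To make concrete the kind of worker-local assignment I have in mind, here is a sketch of what I am trying to write. The global declaration inside the loop body and the Core.eval read-back are only my guesses at how to make x persist on a worker and inspect it afterwards; I do not know whether this is the intended pattern:

using Distributed
addprocs(4)
@everywhere using Random

@sync @distributed for i = 1:4
    global x = randn(1)    # hoping this outlives the loop body on the worker that ran it
    println(x)
end

# afterwards, try to read the copy of x stored on worker 2
remotecall_fetch(Core.eval, 2, Main, :(x))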
I wish there were a nice flowchart or good documentation explaining the scope of variables and expressions in Julia during parallel computations.