
I'm working on a Monte Carlo visualisation in Power BI. So far I have been able to perform the Monte Carlo simulation, but I'm stuck on creating the visualisation.

I created a table with two columns, iteration and result, which can have any number of rows (let's say 1,000 for this example). I want to display the results in a bar graph of about 20 bars. However, since the result can be any number, I have to group the results into 20 ranges, which I then display in the graph. I have all of this working: I get a nice table with a name (a textual representation of the range), the min value, the max value, and a count of how many iterations fall within that range.
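
To give an idea of what I mean, a binning step along these lines could look roughly like the sketch below. The equal-width bins and the step names here are just for illustration; the MonteCarlo_runs table and the gevolgTotaal column are the same ones used in the code further down.

    // Illustrative only: group the simulation results into 20 equal-width ranges
    // and return a table with Name, Min, Max and Count per range.
    let
        binCount = 20,
        minVal   = List.Min(MonteCarlo_runs[gevolgTotaal]),
        maxVal   = List.Max(MonteCarlo_runs[gevolgTotaal]),
        binWidth = (maxVal - minVal) / binCount,
        bins = List.Transform(
            {0 .. binCount - 1},
            (i) =>
                let
                    lower = minVal + i * binWidth,
                    // the last bin ends exactly at the maximum and is closed on both sides
                    upper = if i = binCount - 1 then maxVal else lower + binWidth,
                    hits  = Table.SelectRows(
                                MonteCarlo_runs,
                                each [gevolgTotaal] >= lower and
                                     (if i = binCount - 1
                                      then [gevolgTotaal] <= upper
                                      else [gevolgTotaal] < upper))
                in
                    [Name  = Number.ToText(lower, "F2") & " - " & Number.ToText(upper, "F2"),
                     Min   = lower,
                     Max   = upper,
                     Count = Table.RowCount(hits)]),
        RangeTable = Table.FromRecords(bins)
    in
        RangeTable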

But then I noticed that the total count of iterations does not add up to 1,000. I checked why this could be and noticed that several variables seemed to be recalculated. So I created some test code to see what happens:

    // take the minimum of the gevolgTotaal column of the simulation table
    tableMinGevolg = Record.Field(Table.Min(MonteCarlo_runs, "gevolgTotaal"), "gevolgTotaal"),
    // these three should all be identical, yet each reference can trigger a re-evaluation
    test1 = tableMinGevolg,
    test2 = tableMinGevolg,
    test3 = tableMinGevolg,

Running this code can (and usually will) result in three different values for the test variables. So I went digging and discovered that Power Query M uses lazy evaluation for record and table expressions. Since a Monte Carlo simulation relies on randomness, I assume this causes my problem: every time the table is referenced, it is evaluated again, and the randomness can produce different results, which messes up my range counts.
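
The effect can be reproduced with a small stand-alone query. The names and the tiny random table below are purely for illustration; Number.RandomBetween stands in for the actual simulation:

    let
        // a tiny stand-in for the simulation: 5 rows with a random result each
        RandomRuns = Table.FromColumns(
            { {1 .. 5},
              List.Transform({1 .. 5}, each Number.RandomBetween(0, 100)) },
            {"iteration", "result"}),
        minA = Record.Field(Table.Min(RandomRuns, "result"), "result"),
        minB = Record.Field(Table.Min(RandomRuns, "result"), "result")
    in
        // the two minima can differ, because RandomRuns can be re-evaluated per reference
        [minA = minA, minB = minB]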

I would like to know if it is possible to force eager evaluation on a variable, so I can be sure the Monte Carlo simulation is not performed again while I'm calculating these ranges?


1 Answer


Try using the Table.Buffer function to load the simulation table into memory once, and then build all further steps on the buffered copy. Note that Table.Buffer takes a table, so it has to wrap MonteCarlo_runs itself rather than the extracted minimum value:

    BufferedRuns = Table.Buffer(MonteCarlo_runs),
    tableMinGevolg = Record.Field(Table.Min(BufferedRuns, "gevolgTotaal"), "gevolgTotaal"),
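
For completeness, a sketch of how the buffered step could sit in the full query (the step names other than MonteCarlo_runs and tableMinGevolg are only illustrative):

    let
        // evaluate the simulation once and keep a stable copy in memory
        BufferedRuns   = Table.Buffer(MonteCarlo_runs),
        tableMinGevolg = Record.Field(Table.Min(BufferedRuns, "gevolgTotaal"), "gevolgTotaal"),
        tableMaxGevolg = Record.Field(Table.Max(BufferedRuns, "gevolgTotaal"), "gevolgTotaal"),
        test1 = tableMinGevolg,
        test2 = tableMinGevolg,
        test3 = tableMinGevolg   // all three now return the same value
    in
        BufferedRuns

Table.Buffer pins a snapshot of the table in memory, so every downstream step sees the same rows instead of re-running the simulation; List.Buffer does the same for list values.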