If I have the option to configure Spark with a very large amount of memory, how much should I use?
Some people say that more than 32 GB of memory per executor is not helpful, because above that threshold the JVM can no longer use compressed object pointers (compressed oops).
Assuming I could give Spark about 200 GB of memory per node, should I create one executor for each 32 GB of RAM, i.e. run multiple executors per worker? Or is it better to have a single executor with a very large amount of RAM? A configuration sketch of what I mean is below.
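To make the two options concrete, here is a rough PySpark sketch of what I mean. The numbers (31g heap, 6 executors, 5 cores) are just an illustration of the "many ~32 GB executors" layout, not values I've tested:

```python
from pyspark.sql import SparkSession

# Option A: several smaller executors per 200 GB worker node
# (example numbers only, not a recommendation)
spark = (
    SparkSession.builder
    .appName("memory-sizing-question")
    # keep the heap under ~32 GB so the JVM can still use compressed oops
    .config("spark.executor.memory", "31g")
    # e.g. 6 executors x 31 GB ~= 186 GB of the 200 GB node
    .config("spark.executor.instances", "6")
    .config("spark.executor.cores", "5")
    .getOrCreate()
)

# Option B (the alternative I'm asking about): one big executor per node
# .config("spark.executor.memory", "190g")
# .config("spark.executor.instances", "1")
```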