
I have a 2-node Elasticsearch cluster (Elastic Cloud) with a 60 GB heap. Below are my indices and the number of shards allocated to each:

health status index                  pri rep docs.count docs.deleted store.size pri.store.size
green  open   prod-master-account      6   0    6871735        99067      4.9gb          4.9gb
green open prod-master-categories    1 1      221        6   3.5mb   1.7mb 
green open prod-v1-apac              4 1 10123830  1405510  11.4gb   5.6gb 
green open prod-v1-emea              9 1 28608447  2405254  30.6gb    15gb 
green open prod-v1-global           10 1 94955647 12548946 128.1gb  61.2gb 
green open prod-v1-latam             2 1  4398361   471038   4.7gb   2.3gb 
green open prod-v1-noram             9 1 51933712  6188480  60.1gb  29.2gb

JVM memory usage is above 60%. I want to downgrade this cluster to a smaller heap size, but the operation fails every time with a circuit-breaker exception because JVM memory usage is too high.

Why is JVM memory usage still high? How can I keep it low? Am I doing something wrong with sharding?

The guides say to keep fewer than 20 shards per GB of heap, and looking at my configuration I am well under that limit. How can I downgrade this cluster to one with a smaller heap?
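For reference, the shard budget from the index listing above can be tallied directly. This is a quick sketch; the shard counts are taken from the listing and the 60 GB per-node heap from the question:

```python
# Check the "fewer than 20 shards per GB of heap" guideline against
# the shard counts shown in the question's index listing.
indices = {
    # index name: (primary shards, replicas per primary)
    "prod-master-account": (6, 0),
    "prod-master-categories": (1, 1),
    "prod-v1-apac": (4, 1),
    "prod-v1-emea": (9, 1),
    "prod-v1-global": (10, 1),
    "prod-v1-latam": (2, 1),
    "prod-v1-noram": (9, 1),
}

# Each primary carries `replicas` additional copies.
total_shards = sum(p * (1 + r) for p, r in indices.values())

nodes = 2
heap_per_node_gb = 60
shards_per_node = total_shards / nodes
limit_per_node = 20 * heap_per_node_gb  # guideline ceiling

print(f"total shards (primaries + replicas): {total_shards}")
print(f"shards per node: {shards_per_node:.0f} (guideline limit: {limit_per_node})")
```

With only 76 shards in total (38 per node against a guideline ceiling of 1,200), shard count alone clearly does not explain the heap pressure; the large `prod-v1-global` and `prod-v1-noram` indices and their query/aggregation load are more likely culprits.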

Much appreciated!

Comment: "Any luck? Please go through my answer and let me know if you need more info." – user156327

1 Answer


A 60 GB heap is not recommended for Elasticsearch (or any JVM process): beyond roughly 32 GB the JVM can no longer use compressed ordinary object pointers (compressed oops), so you lose memory efficiency and won't get optimal performance.

Please refer to the official ES documentation on heap settings for more info.
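As an illustration, on a self-managed node the heap is pinned below the compressed-oops cutoff in `config/jvm.options` (on Elastic Cloud the heap is managed for you, so you resize by choosing a smaller deployment instead). The `31g` value here is an illustrative safe ceiling, not a tuned figure:

```
# config/jvm.options — set min and max heap to the same value,
# staying under the ~32 GB compressed-oops threshold
-Xms31g
-Xmx31g
```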

You can try the following to optimize the ES heap size:

  1. If you have machines with a lot of RAM, move to mid-size machines instead, and allocate 50% of each machine's RAM to the ES heap (without crossing the ~32 GB threshold).
  2. Assign fewer primary shards and increase replica shards for better search performance.
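As a sketch of point 2: the replica count can be changed on a live index with a single call to the index settings API (primary-shard counts are fixed at index creation and require the shrink API or a reindex to change). The snippet below only builds the request; the index name and replica count are illustrative, and actually sending it requires a reachable cluster:

```python
import json

def replica_settings_request(index: str, replicas: int):
    """Build the PUT /<index>/_settings request body that changes
    an index's replica count (Elasticsearch index settings API)."""
    path = f"/{index}/_settings"
    body = {"index": {"number_of_replicas": replicas}}
    return path, json.dumps(body)

# Example: raise prod-v1-global to 2 replicas for more search throughput
path, body = replica_settings_request("prod-v1-global", 2)
print("PUT", path)
print(body)
```

Note that extra replicas improve search concurrency and resilience but also consume additional disk and heap on each node, so on a 2-node cluster more than 1 replica per primary cannot even be allocated.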