I have a 2-node Elasticsearch cluster on Elastic Cloud with a 60GB heap. Below are my indices and the number of shards allocated to each:
health status index                  pri rep docs.count docs.deleted store.size pri.store.size
green  open   prod-master-account      6   0    6871735        99067      4.9gb          4.9gb
green  open   prod-master-categories   1   1        221            6      3.5mb          1.7mb
green  open   prod-v1-apac             4   1   10123830      1405510     11.4gb          5.6gb
green  open   prod-v1-emea             9   1   28608447      2405254     30.6gb           15gb
green  open   prod-v1-global          10   1   94955647     12548946    128.1gb         61.2gb
green  open   prod-v1-latam            2   1    4398361       471038      4.7gb          2.3gb
green  open   prod-v1-noram            9   1   51933712      6188480     60.1gb         29.2gb
JVM memory pressure stays above 60%. I want to downgrade this cluster to a smaller heap, but the downgrade fails every time with a circuit-breaker exception because JVM memory usage is too high.
Why is JVM memory usage still so high? How can I keep it low? Am I doing something wrong with my sharding?
The official guidance says to keep the shard count below 20 shards per GB of heap, and my configuration looks well under that limit. How can I downgrade this cluster to one with a smaller heap?
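For reference, here is the quick sanity check I did against the 20-shards-per-GB guideline. It is just a sketch: the primary/replica counts come from the `_cat/indices` listing above, each index contributes primaries × (1 + replicas) total shards, and I am assuming the 60GB figure is the combined heap across both nodes.

```python
# Sanity check: total shard count vs. the ~20-shards-per-GB-of-heap guideline.
# (pri, rep) pairs are copied from the _cat/indices listing above.
indices = {
    "prod-master-account":    (6, 0),
    "prod-master-categories": (1, 1),
    "prod-v1-apac":           (4, 1),
    "prod-v1-emea":           (9, 1),
    "prod-v1-global":         (10, 1),
    "prod-v1-latam":          (2, 1),
    "prod-v1-noram":          (9, 1),
}

heap_gb = 60  # assumption: 60GB is the total heap across the 2 nodes

# Each index contributes primaries * (1 + replicas) shards.
total_shards = sum(pri * (1 + rep) for pri, rep in indices.values())
shards_per_gb = total_shards / heap_gb

print(f"total shards:  {total_shards}")        # 76
print(f"shards per GB: {shards_per_gb:.2f}")   # 1.27 -- far below the 20/GB limit
```

So by my math the cluster is nowhere near the shard-count ceiling, which is why the circuit-breaker failures surprise me.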
Much appreciated!