7
votes

Deployed Elasticsearch on Kubernetes in GKE, with 2GB of memory and a 1GB persistent disk.

We got an out-of-storage exception, so we increased the disk to 2GB. The very next day it reached 2GB again, even though we had not run any big queries. We then increased the persistent disk to 10GB, and since then disk usage has stopped growing.

On further analysis, we found that the indices take only about 20MB in total, so we are unable to work out what data is occupying the disk.

We used the Elasticsearch nodes stats API to get disk and node statistics.
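For reference, disk and index usage can be inspected with the nodes stats and cat APIs. A minimal sketch, assuming the cluster is reachable on `localhost:9200` (the host and port are assumptions, adjust for your GKE service):

```shell
# Per-node filesystem stats: total, free and available disk space
curl -s "http://localhost:9200/_nodes/stats/fs?pretty"

# Disk allocation per node: how much of the data path each node uses
curl -s "http://localhost:9200/_cat/allocation?v"

# Size and document count of every index, including replica shards
curl -s "http://localhost:9200/_cat/indices?v&bytes=mb"
```

Comparing the `_cat/allocation` output with the summed index sizes from `_cat/indices` shows whether the space is actually held by indices or by something else on the data path (e.g. translogs or old snapshot data).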

I am unable to find the exact reason why the storage keeps being exceeded, or what data is on the disk. Please also suggest ways to prevent this in the future.


1
I would say that 2GB is simply not enough for Elasticsearch. As far as I can tell from this Elasticsearch Helm chart, the default volume size is 30GB, so the solution here would be to increase the persistent disk size. Let me know what you think about that. - Jakub

1 Answer

1
vote

Elasticsearch is continuously receiving data, and depending on your configuration it creates multiple copies (replicas) of each index and may create a new index every day. Check the config file.
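If the configuration does create one index per day (as Logstash-style setups do by default), old indices can be deleted once they age out of a retention window, and replicas can be dropped on a single-node cluster where they add no resilience. A minimal sketch, assuming GNU `date`, a hypothetical `logstash-YYYY.MM.DD` index naming scheme, and a cluster on `localhost:9200`:

```shell
# Compute the index date 7 days in the past (GNU date syntax)
cutoff=$(date -u -d "-7 days" +%Y.%m.%d)

# Delete the daily index that has aged out of the retention window;
# the "logstash-" name pattern is an assumption - adjust to your config
curl -s -XDELETE "http://localhost:9200/logstash-${cutoff}"

# On a single-node cluster, replicas can never be allocated anyway,
# so setting them to 0 halves the index footprint on disk
curl -s -XPUT "http://localhost:9200/logstash-*/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.number_of_replicas": 0}'
```

For anything beyond a quick cleanup, index lifecycle management (ILM) or Curator is the usual way to automate this retention.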

If the Elasticsearch cluster fails, it creates a backup (snapshot) of the data each time, so you may need to delete old snapshots before restarting the cluster.
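Old snapshots can be listed and removed through the snapshot API. A minimal sketch, assuming a cluster on `localhost:9200`; the repository name `my_backup` and the snapshot name are hypothetical placeholders:

```shell
# List all registered snapshot repositories
curl -s "http://localhost:9200/_snapshot?pretty"

# List the snapshots in one repository (name is an assumption)
curl -s "http://localhost:9200/_cat/snapshots/my_backup?v"

# Delete an old snapshot that is no longer needed
curl -s -XDELETE "http://localhost:9200/_snapshot/my_backup/snapshot_2020.01.01"
```

Deleting a snapshot through the API (rather than removing files from the repository by hand) lets Elasticsearch clean up only the segments no other snapshot still references.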