I would like to know which factors AKS considers when reserving memory on a node, and how it calculates the allocatable memory.
In my cluster we have multiple nodes, each with 2 CPUs and 7 GB of RAM.
What I observed is that all of the nodes (18+) show only about 4 GB of allocatable memory out of 7 GB. Because of this the cluster runs into resource contention for new deployments, and we have to increase the node count accordingly to meet the resource requirements.
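For reference, this is my rough understanding of the reservation math from the AKS resource-reservation documentation for nodes of this size: a 750 Mi hard-eviction threshold plus a regressive kube-reserved rate (25% of the first 4 GB of memory, 20% of the next 4 GB, 10% of the next 8 GB, and so on). The helper below is only my own sketch to check those numbers against what I see, not anything AKS actually runs, and newer AKS versions may use a different formula:

```python
# My own sketch (not AKS code) of the documented regressive reservation:
#   kube-reserved: 25% of first 4 GB, 20% of next 4 GB, 10% of next 8 GB,
#                  6% of next 112 GB, 2% of anything above 128 GB
#   plus a 750 Mi hard-eviction threshold
def aks_allocatable_gb(capacity_gb: float) -> float:
    brackets = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06), (float("inf"), 0.02)]
    remaining, kube_reserved = capacity_gb, 0.0
    for size, rate in brackets:
        step = min(remaining, size)
        kube_reserved += step * rate
        remaining -= step
    eviction_threshold_gb = 0.75
    return capacity_gb - kube_reserved - eviction_threshold_gb

print(aks_allocatable_gb(7))  # ~4.65, close to what my nodes actually report
```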
Update (as mentioned in the comments): adding the kubectl top node output below. What is strange here is how a node's memory consumption % can be more than 100% (see the sanity check after the table).
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
aks-nodepool1-xxxxxxxxx-vmssxxxx00 265m 13% 2429Mi 53%
aks-nodepool1-xxxxxxxxx-vmssxxxx01 239m 12% 3283Mi 71%
aks-nodepool1-xxxxxxxxx-vmssxxxx0g 465m 24% 4987Mi 109%
aks-nodepool2-xxxxxxxxx-vmssxxxx8i 64m 3% 3085Mi 67%
aks-nodepool2-xxxxxxxxx-vmssxxxx8p 114m 6% 5320Mi 116%
aks-nodepool2-xxxxxxxxx-vmssxxxx9n 105m 5% 2715Mi 59%
aks-nodepool2-xxxxxxxxx-vmssxxxxaa 134m 7% 5216Mi 114%
aks-nodepool2-xxxxxxxxx-vmssxxxxat 179m 9% 5498Mi 120%
aks-nodepool2-xxxxxxxxx-vmssxxxxaz 141m 7% 4769Mi 104%
aks-nodepool2-xxxxxxxxx-vmssxxxxb0 72m 3% 1972Mi 43%
aks-nodepool2-xxxxxxxxx-vmssxxxxb1 133m 7% 3684Mi 80%
aks-nodepool2-xxxxxxxxx-vmssxxxxb3 182m 9% 5294Mi 115%
aks-nodepool2-xxxxxxxxx-vmssxxxxb4 133m 7% 5009Mi 109%
aks-nodepool2-xxxxxxxxx-vmssxxxxbj 68m 3% 1783Mi 39%
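My assumption about the MEMORY% column is that it is the node's working-set memory divided by the node's allocatable memory, not by the full 7 GB capacity, which would also explain how it can exceed 100%. A quick sanity check against the first node in the list, using ~4.6 GB as an assumed allocatable value:

```python
# Assumed formula: MEMORY% = node working set / node allocatable (not capacity)
node_working_set_mi = 2429       # "kubectl top node" value for ...vmssxxxx00
allocatable_mi = 4.6 * 1024      # ~4.6 GB allocatable (rounded assumption)
capacity_mi = 7 * 1024           # 7 GB physical memory

print(round(100 * node_working_set_mi / allocatable_mi))  # ~52, close to the reported 53%
print(round(100 * node_working_set_mi / capacity_mi))     # ~34, clearly not what is shown
```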
So as an example I took the node aks-nodepool2-xxxxxxxxx-vmssxxxx8p (114m 6% 5320Mi 116%).
I calculated the memory usage of each pod on that node, which totalled around 4.1 GB, while the node's allocatable memory was 4.6 GB out of the actual 7 GB.
So why is the "top node" output for that node not the same as the sum of the "top pods" output for the pods on it?
Expected %: 4.1 GB / 4.6 GB ≈ 89%, but the top node command reports 116%.
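Putting the same arithmetic side by side for aks-nodepool2-xxxxxxxxx-vmssxxxx8p (all inputs are rounded, and the allocatable value is the same assumption as above):

```python
pods_total_gb = 4.1                # sum of "kubectl top pods" for this node
node_working_set_gb = 5320 / 1024  # 5320 Mi reported by "kubectl top node"
allocatable_gb = 4.6               # allocatable out of 7 GB capacity

print(round(100 * pods_total_gb / allocatable_gb))        # ~89 - what I expected
print(round(100 * node_working_set_gb / allocatable_gb))  # ~113 - roughly the 116% reported
```

So the node-level figure is more than 1 GB higher than the sum of the per-pod figures on the same node, and that gap is what I am trying to understand.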