Prerequisites:
GKE 1.14.x or 1.15.x, latest stable
Labeled node pools, created by Deployment Manager (see the template sketch below)
An application that requires a persistent volume in RWO (ReadWriteOnce) mode
Each application deployment is different, must run at the same time as the others, and is limited to one pod per node.
Each pod has no replicas and must support rolling updates (via Helm).
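
For reference, a minimal sketch of how such a labeled node pool could look in the Deployment Manager template. The resource names, zone, and initial label value are illustrative assumptions; the label key matches the affinity shown further down. NodeConfig.labels is the field that is applied as Kubernetes node labels on each node of the pool:

    resources:
    - name: my-gke-cluster              # illustrative name
      type: container.v1.cluster
      properties:
        zone: europe-west1-b            # illustrative zone
        cluster:
          name: my-gke-cluster
          nodePools:
          - name: app-pool
            initialNodeCount: 2
            config:
              # NodeConfig.labels become Kubernetes node labels,
              # which is what nodeAffinity / nodeSelector match against.
              labels:
                node/nodeisbusy: free   # initial value is an assumption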
Design:
Deployment Manager template for the cluster and node pools;
node pools are labeled, and every node in a pool carries the same label (after initial creation).
Each new app is deployed into its own namespace, which gives it a unique service address.
Each new release is either a fresh install or an update of an existing one, selected by node label (node labels can be changed with kubectl during install or update of the app, as sketched below).
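
A sketch of that relabeling step, assuming the label key from the affinity below; the node name is illustrative:

    # Mark the chosen node as busy so the next release avoids it;
    # the pod already scheduled there stays (IgnoredDuringExecution).
    kubectl label node gke-my-cluster-app-pool-1234 node/nodeisbusy=busy --overwrite

    # On uninstall, free the node again by removing the label.
    kubectl label node gke-my-cluster-app-pool-1234 node/nodeisbusy-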
Problem:
This works normally when the cluster is created through the browser console. When the cluster is created by GCP Deployment Manager, scheduling fails (tested with the nginx template from the Kubernetes docs plus node affinity, even with no persistent disk attached):

    Warning  FailedScheduling   17s (x2 over 17s)  default-scheduler   0/2 nodes are available: 2 node(s) didn't match node selector.
    Normal   NotTriggerScaleUp  14s                cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 node(s) didn't match node selector
What is the problem? Does Deployment Manager create the labels incorrectly?
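
One way to check (not part of the original report) is to compare the actual Kubernetes node labels between the two clusters; note that GCP resource labels on a node pool are not the same thing as NodeConfig.labels, and only the latter are visible to the scheduler:

    # List nodes with all their Kubernetes labels.
    kubectl get nodes --show-labels

    # Or print the labels of a single node.
    kubectl get node <node-name> -o jsonpath='{.metadata.labels}'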
The affinity used:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node/nodeisbusy
              operator: NotIn
              values:
              - busy
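
For completeness, a minimal sketch of the reproduction: the nginx Deployment from the Kubernetes docs combined with the affinity above. The object names and image tag are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          # Require a node whose node/nodeisbusy label is not 'busy'
          # (nodes without the label also satisfy NotIn).
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: node/nodeisbusy
                    operator: NotIn
                    values:
                    - busy
          containers:
          - name: nginx
            image: nginx:1.17   # illustrative tag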