0 votes

I am trying to upgrade a Kubernetes cluster from 1.11 to 1.12. I have followed the proper steps and reached this point:

[root@ip-10-0-1-124 a10-harmony-controller-5.0.0]# kubeadm upgrade apply v1.12.3 --force --config=/tmp/a10_setup/multi_master/config.yaml 
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration options from a file: /tmp/a10_setup/multi_master/config.yaml
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.0
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...

The upgrade is stuck at this point and does not move forward. Using log level v9, I was able to find out why: it cannot find the kube-apiserver pod in the kube-system namespace.

I0902 09:46:51.194839  616837 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"kube-apiserver-ip-10-0-1-124.ec2.internal\" not found","reason":"NotFound","details":{"name":"kube-apiserver-ip-10-0-1-124.ec2.internal","kind":"pods"},"code":404}
I0902 09:46:51.692437  616837 round_trippers.go:386] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.12.3 (linux/amd64) kubernetes/435f92c" -H "Accept: application/json, */*" 'https://10.0.1.124:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-10-0-1-124.ec2.internal'

The kube-apiserver pod in my kube-system namespace is named kube-apiserver-10.0.1.124, while the upgrade is searching for the name kube-apiserver-ip-10-0-1-124.ec2.internal. The upgrade appends the hostname to kube-apiserver-, whereas I have nodeName set to 10.0.1.124.
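For reference, the mismatch can be confirmed by comparing the static pod name, the registered node name, and the machine hostname. The commands below are only illustrative and are not part of the upgrade itself:

# Static pod name as it actually exists in kube-system
kubectl -n kube-system get pods -o name | grep kube-apiserver
# Node name registered with the API server vs. the machine hostname
kubectl get nodes -o name
hostname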

Here is the config I am using for the upgrade:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.124
  controlPlaneEndpoint: 10.0.1.124
etcd:
  endpoints:
  - https://10.0.1.124:2379
  - https://10.0.1.231:2379
  - https://10.0.1.30:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.12.0/24
kubernetesVersion: 1.12.3
apiServerCertSANs:
- 10.0.1.124
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeName: 10.0.1.124

Is there a parameter that will make the upgrade look for the right pod name? How can I resolve this issue?


2 Answers

2 votes

The issue occurs when your Kubernetes master node name is not equal to the hostname: the kubeadm upgrade process cannot determine your node name and falls back to searching by the hostname, which is the default. As a workaround, you can provide a custom config during kubeadm upgrade as follows:

kubectl -n kube-system get cm kubeadm-config -o jsonpath={.data.MasterConfiguration} > config.yaml

At the end of config.yaml, add the following block (a concrete example follows):

nodeRegistration: 
  name: <node-name>
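For instance, with the nodeName from the question (10.0.1.124), the appended block would look like the sketch below; the heredoc is just one convenient way to add it and assumes the config was dumped with the command above. Substitute your own node name:

# Append the nodeRegistration block to the dumped config
# (10.0.1.124 is the nodeName used in the question)
cat >> config.yaml <<'EOF'
nodeRegistration:
  name: 10.0.1.124
EOF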

Now, when you run the upgrade using the above config file, it goes through:

[root@ip-10-0-1-124 centos]# kubeadm upgrade apply v1.12.3 --config config.yaml
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration options from a file: config.yaml
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.0
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10.0.1.124" as an annotation
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

Posting this as an answer in case someone else faces the same issue.

1 vote

The version in question is a bit outdated.

Please respect the minor version skew: from v1.11.0, upgrade to v1.12.0 first. You should also consider moving to one of the newest supported releases; see "Kubernetes version and version skew support policy" and "Supported Versions of the Kubernetes Documentation".

For the newest releases there is much more specific information:

All containers are restarted after upgrade, because the container spec hash value is changed. You can only upgrade from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.

Additional (now outdated) information: please perform the upgrade to the +1 minor version, i.e. from 1.11.0 to 1.12.0. Other helpful commands (a combined sketch follows the list):

kubeadm upgrade plan
kubeadm upgrade apply --force (in order to recover from a failure state)
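Putting the one-minor-version rule and these commands together, a sketch of the path could look like this (version numbers are only examples; always check the kubeadm upgrade plan output first):

# Illustrative only: upgrade one MINOR version at a time
kubeadm upgrade plan
kubeadm upgrade apply v1.12.3 --config config.yaml
# once the control plane and nodes are on 1.12.x, repeat the procedure for 1.13.x, and so on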

I am not sure it is good practice to use the --force option during upgrade apply:

Force upgrading although some requirements might not be met. This also implies non-interactive mode.

You can also use the following flags (an example follows the list):

--dry-run Do not change any state, just output what actions would be performed
--diff Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
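For example, a non-destructive preview before the real upgrade could look like this (same config file as in the question; purely illustrative):

# Print the actions that would be performed, without changing cluster state
kubeadm upgrade apply v1.12.3 --config config.yaml --dry-run
# Show the diff against the existing static pod manifests
kubeadm upgrade apply v1.12.3 --config config.yaml --diff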

Hope this helps.