Like any other software, Kubernetes is affected by security vulnerabilities from time to time. On November 26th, 2018, CVE-2018-1002105 was published with a critical CVSS score of 9.8 (out of a maximum of 10).
With a specially crafted request, users that are authorized to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over the same connection directly to that backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection.
Every team running one of the following Kubernetes versions should update immediately:
- Kubernetes v1.0.x-1.9.x
- Kubernetes v1.10.0-1.10.10 (fixed in v1.10.11)
- Kubernetes v1.11.0-1.11.4 (fixed in v1.11.5)
- Kubernetes v1.12.0-1.12.2 (fixed in v1.12.3)
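To quickly check whether a given version string falls into one of the ranges above, an illustrative (not exhaustive — when in doubt, consult the official advisory) shell helper could look like this; `is_affected` is my own sketch, not part of any Kubernetes tooling:

```shell
# Returns 0 (affected) for versions below the fixed releases listed above.
is_affected() {
  v=${1#v}                        # strip leading "v", e.g. v1.11.4 -> 1.11.4
  minor=$(echo "$v" | cut -d. -f2)
  patch=$(echo "$v" | cut -d. -f3)
  case "$minor" in
    10) [ "$patch" -lt 11 ] ;;    # fixed in v1.10.11
    11) [ "$patch" -lt 5 ]  ;;    # fixed in v1.11.5
    12) [ "$patch" -lt 3 ]  ;;    # fixed in v1.12.3
    *)  [ "$minor" -le 9 ]  ;;    # all of v1.0.x-1.9.x
  esac
}

is_affected v1.11.4 && echo "affected - upgrade now"
```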
In our case, we decided to upgrade to v1.11.6.
For good reason, kubeadm, kubelet and kubectl are excluded from automatic updates by the system's package manager (see the exclude=kube* line):
$ cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
exclude=kube*
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
We need to add a small switch to the yum update command to run an update on those three packages:
yum update -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Let me add a small note here. If you’ve set custom KUBELET_EXTRA_ARGS in /etc/sysconfig/kubelet, check that file again after the update has been done. The kubelet RPM package still has an issue that can overwrite it.
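As a precaution against that, you can back up the file before the update and diff it afterwards. This is my own sketch, not part of the kubelet packaging; the helper name is hypothetical:

```shell
# Hypothetical helper: keep a backup copy of a sysconfig file so
# custom KUBELET_EXTRA_ARGS can be restored if the RPM replaces it.
backup_sysconfig() {
  if [ -f "$1" ]; then
    cp "$1" "$1.bak"
  fi
}

# Before the update, on the real node:
backup_sysconfig /etc/sysconfig/kubelet
# After the update, check for differences:
# diff /etc/sysconfig/kubelet.bak /etc/sysconfig/kubelet
```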
Once the package update is done, proceed to step 2:
kubeadm upgrade plan
This runs some pre-update checks. If everything finishes without errors, we can start the actual upgrade with step 3:
kubeadm upgrade apply v1.11.6
Don’t worry if this takes some time and if you see some timeout errors.
Afterwards, kubeadm will (hopefully) tell you that the upgrade succeeded.
Let’s do some post upgrade tasks:
systemctl daemon-reload
systemctl restart kubelet
Wait a few seconds and continue with an update of your CNI (Container Network Interface). This step highly depends on which CNI you’re using. In our case (Calico), we need to do:
$ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node configured
and
$ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config unchanged
service/calico-typha unchanged
deployment.apps/calico-typha configured
poddisruptionbudget.policy/calico-typha unchanged
daemonset.extensions/calico-node configured
serviceaccount/calico-node unchanged
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
That’s it. The master node is done.
Our worker nodes need to be updated, too. Start by draining a node from the master:
$ kubectl drain ip-10-10-12-70.eu-central-1.compute.internal --ignore-daemonsets --delete-local-data
Switch to the node now and run the system’s package upgrade:
$ yum update kubelet-1.11.6-0 kubeadm-1.11.6-0 kubectl-1.11.6-0 --disableexcludes=kubernetes
Then upgrade the node configuration:
$ kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
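The subshell in the command above just extracts the version tag from kubelet’s output, so kubeadm configures the node for the version that is actually installed. For illustration:

```shell
# "kubelet --version" prints a line like "Kubernetes v1.11.6";
# the second whitespace-separated field is the tag kubeadm needs.
sample="Kubernetes v1.11.6"
echo "$sample" | cut -d ' ' -f 2   # → v1.11.6
```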
and restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
Finally, don’t forget to "undrain" (uncordon) the freshly updated node from the master again:
$ kubectl uncordon ip-10-10-12-70.eu-central-1.compute.internal
That’s it again.
Check your nodes’ versions and verify that all pods, daemonsets etc. are running again.
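From the master, that final check can be done with a few standard kubectl commands (guarded here so the snippet is a no-op where kubectl is unavailable; the actual output depends on your cluster):

```shell
# Verify the upgrade: node versions should read v1.11.6 and all
# workloads should be back in a Running state.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o wide
  kubectl get pods --all-namespaces
  kubectl get daemonsets --all-namespaces
fi
```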