+++ This bug was initially created as a clone of Bug #1915582 +++

carry: https://github.com/kubernetes/kubernetes/pull/97860
Fixes kubernetes#96459
Fixes kubernetes#97685
PR https://github.com/kubernetes/kubernetes/pull/97860 fixes kubernetes#97685, which described a cluster upgraded from 1.19.5 to 1.20.1 with kubeadm where kube-apiserver CPU usage went from 50% to 300%. Let's confirm the fix is included in the latest 4.6 payload, then observe the cluster for a while and check its performance and status.

$ oc adm release info --commits registry.ci.openshift.org/ocp/release:4.6.0-0.nightly-2021-01-22-123731 | grep 'hyperkube'
  hyperkube   https://github.com/openshift/kubernetes   e09cf6e3abf33aca0ea49b5a349b34f11322f35b

$ cd kubernetes/
$ git pull
$ git log --date=local --pretty="%h %an %cd - %s" e09cf6e3 | grep '97860'
9e73d199359 Abu Kashem Wed Jan 13 08:24:33 2021 - UPSTREAM: 97860: move all variables in sampleAndWaterMarkHistograms::innerSet

$ oc version -o yaml
clientVersion:
  buildDate: "2020-12-18T00:30:44Z"
  compiler: gc
  gitCommit: 02c110006bfef4ba53fa5042bb9eae170dd3dc1c
  gitTreeState: clean
  gitVersion: 4.6.0-202012172338.p0-02c1100
  goVersion: go1.15.5
  major: ""
  minor: ""
  platform: linux/amd64
openshiftVersion: 4.6.0-0.nightly-2021-01-22-123731
serverVersion:
  buildDate: "2021-01-22T08:55:15Z"
  compiler: gc
  gitCommit: e09cf6e3abf33aca0ea49b5a349b34f11322f35b
  gitTreeState: clean
  gitVersion: v1.19.0+e09cf6e
  goVersion: go1.15.5
  major: "1"
  minor: "19"
  platform: linux/amd64

$ oc get pods -n openshift-kube-apiserver -l apiserver
NAME                                               READY   STATUS    RESTARTS   AGE
kube-apiserver-kewang256x1-7snxz-control-plane-0   5/5     Running   0          59m
kube-apiserver-kewang256x1-7snxz-control-plane-1   5/5     Running   0          62m
kube-apiserver-kewang256x1-7snxz-control-plane-2   5/5     Running   0          65m

$ oc adm top pod -n openshift-kube-apiserver
NAME                                               CPU(cores)   MEMORY(bytes)
kube-apiserver-kewang256x1-7snxz-control-plane-0   72m          777Mi
kube-apiserver-kewang256x1-7snxz-control-plane-1   258m         1405Mi
kube-apiserver-kewang256x1-7snxz-control-plane-2   265m         1581Mi

$ oc adm top node
NAME                                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kewang256x1-7snxz-compute-0         662m         18%    2911Mi          42%
kewang256x1-7snxz-compute-1         490m         14%    1955Mi          28%
kewang256x1-7snxz-compute-2         686m         19%    3283Mi          47%
kewang256x1-7snxz-control-plane-0   875m         11%    4476Mi          30%
kewang256x1-7snxz-control-plane-1   694m         9%     3614Mi          24%
kewang256x1-7snxz-control-plane-2   1298m        17%    5007Mi          33%
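To watch the trend while observing the cluster, the same numbers can simply be re-sampled periodically. A minimal sketch (the 10-minute interval is an arbitrary choice for this check, not something required by the bug):

$ while true; do
>   date                                                    # timestamp each sample
>   oc adm top pod -n openshift-kube-apiserver -l apiserver # same command as above
>   sleep 600                                               # sample every 10 minutes
> done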
$ oc get no
NAME                                STATUS   ROLES    AGE   VERSION
kewang256x1-7snxz-compute-0         Ready    worker   24h   v1.19.0+3b01205
kewang256x1-7snxz-compute-1         Ready    worker   24h   v1.19.0+3b01205
kewang256x1-7snxz-compute-2         Ready    worker   24h   v1.19.0+3b01205
kewang256x1-7snxz-control-plane-0   Ready    master   24h   v1.19.0+3b01205
kewang256x1-7snxz-control-plane-1   Ready    master   24h   v1.19.0+3b01205
kewang256x1-7snxz-control-plane-2   Ready    master   24h   v1.19.0+3b01205

$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0-0.nightly-2021-01-22-123731   True        False         False      24h
cloud-credential                           4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
cluster-autoscaler                         4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
config-operator                            4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
console                                    4.6.0-0.nightly-2021-01-22-123731   True        False         False      24h
csi-snapshot-controller                    4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
dns                                        4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
etcd                                       4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
image-registry                             4.6.0-0.nightly-2021-01-22-123731   True        False         False      24h
ingress                                    4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
insights                                   4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
kube-apiserver                             4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
kube-controller-manager                    4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
kube-scheduler                             4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
kube-storage-version-migrator              4.6.0-0.nightly-2021-01-22-123731   True        False         False      24h
machine-api                                4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
machine-approver                           4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
machine-config                             4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
marketplace                                4.6.0-0.nightly-2021-01-22-123731   True        False         False      24h
monitoring                                 4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
network                                    4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
node-tuning                                4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
openshift-apiserver                        4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
openshift-controller-manager               4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
openshift-samples                          4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
operator-lifecycle-manager                 4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
operator-lifecycle-manager-catalog         4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
operator-lifecycle-manager-packageserver   4.6.0-0.nightly-2021-01-22-123731   True        False         False      24h
service-ca                                 4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h
storage                                    4.6.0-0.nightly-2021-01-22-123731   True        False         False      25h

After 24h the cluster remains in a normal state with good CPU/memory usage, so moving the bug to VERIFIED.
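Note that the pod-level figures above aggregate all five containers in each kube-apiserver pod. If apiserver CPU ever needs to be attributed to the kube-apiserver container itself, a per-container breakdown can help; a sketch, assuming the client supports the --containers flag of oc adm top pod:

$ oc adm top pod -n openshift-kube-apiserver -l apiserver --containers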
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6.15 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0235