Bug 1886627 - Kube-apiserver pods restarting/reinitializing periodically
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.6
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Stefan Schimanski
QA Contact: Ke Wang
URL:
Whiteboard: aos-scalability-46
Depends On:
Blocks:
 
Reported: 2020-10-09 00:16 UTC by Naga Ravi Chaitanya Elluri
Modified: 2021-02-24 15:24 UTC
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:24:26 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub: openshift/library-go pull 916 (closed): Bug 1886627: operator/installer: don't mark a successful installer with pending revision as failed (last updated 2021-02-11 14:36:06 UTC)
- Red Hat Product Errata: RHSA-2020:5633 (last updated 2021-02-24 15:24:54 UTC)

Description Naga Ravi Chaitanya Elluri 2020-10-09 00:16:53 UTC
Description of problem:
On a cluster built with 4.6.0-rc.0 bits, the kube-apiserver pods are restarting/reinitializing periodically (twice in 6 hours). During those windows the kube-apiserver cluster operator reports Degraded for a few minutes because of a node installer pod failure. CPU and memory usage also spike during the reinitialization (expected behavior).

Logs/must-gather can be found here: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/large-scale/4.6-sdn/bugs/apiserver-reinitializing/

Version-Release number of selected component (if applicable):
4.6.0-rc.0

How reproducible:
Twice on the same cluster over 6 hours.

Steps to Reproduce:
1. Install a cluster using 4.6.0-rc.0 build. 
2. Track the status of kube-apiserver pods over a period of time. We used https://github.com/openshift-scale/cerberus to track/report it.
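A lightweight stand-in for the Cerberus tracking above can be sketched in shell (the pod names, UIDs, and JSON snapshots here are hypothetical sample data; on a live cluster each snapshot would come from something like `oc get pods -n openshift-kube-apiserver -o json`). The key observation from this bug is that a reinitialization creates a new pod, so it shows up as a changed pod UID while the restart counter stays at 0:

```shell
# Sketch: detect a kube-apiserver pod reinitialization by comparing pod UIDs
# between two snapshots. Sample data below is hypothetical; a real run would
# capture `oc get pods -n openshift-kube-apiserver -o json` at two times.
before='{"items":[{"metadata":{"name":"kube-apiserver-master-0","uid":"aaa-111"},"status":{"containerStatuses":[{"restartCount":0}]}}]}'
after='{"items":[{"metadata":{"name":"kube-apiserver-master-0","uid":"bbb-222"},"status":{"containerStatuses":[{"restartCount":0}]}}]}'

# Extract the UID of a named pod from a snapshot.
uid() { echo "$1" | jq -r --arg n "$2" '.items[] | select(.metadata.name==$n) | .metadata.uid'; }

if [ "$(uid "$before" kube-apiserver-master-0)" != "$(uid "$after" kube-apiserver-master-0)" ]; then
  echo "kube-apiserver-master-0 was reinitialized (new pod UID, restart count unchanged)"
fi
```

This is why comment 3 below sees a restart count of 0 despite the pods visibly reinitializing: the counter only tracks container restarts within one pod, not pod replacement.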

Actual results:
Kube-apiserver pods are restarting/reinitializing periodically.

Expected results:
Kube-apiserver pods are healthy and are not restarting/reinitializing periodically.

Comment 2 Stefan Schimanski 2020-10-09 09:29:27 UTC
When did you see the restarts? How did you notice them?

Comment 3 Naga Ravi Chaitanya Elluri 2020-10-09 11:05:53 UTC
We noticed the kube-apiserver pods reinitializing one after the other at 2020-10-08T16:16:39 UTC and 2020-10-08T21:07:47 UTC. This can also be confirmed by comparing the age of the kube-apiserver pods to the age of the cluster; note, however, that the restart count still shows 0.

Comment 5 Stefan Schimanski 2020-10-09 13:52:33 UTC
The kube-apiserver deployment at 9pm was due to cert rotation. That's totally expected hours after installation.

I don't see that it went degraded around 9pm.

Comment 6 Stefan Schimanski 2020-10-09 14:01:42 UTC
I checked both times: 2020-10-08T16:16:39 UTC and 2020-10-08T21:07:47 UTC. As written above, I see a new revision due to cert rotation for the latter. I don't see anything around the former: no new revision and no condition changes.

Comment 7 Stefan Schimanski 2020-10-09 14:02:15 UTC
Moving out of blocker list until proven otherwise.

Comment 8 Stefan Schimanski 2020-10-09 14:08:04 UTC
Disregard comment 6; I was looking at the wrong log file.

For the 16:16 timestamp:

2020-10-08T16:15:25.559671713Z I1008 16:15:25.559580       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"28b539b0-66c0-4551-9c93-d19a56ad9e82", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 7 triggered by "configmap/kubelet-serving-ca has changed"

and then some minutes later:

2020-10-08T16:21:04.174926102Z I1008 16:21:04.174863       1 status_controller.go:172] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2020-10-08T16:21:04Z","message":"NodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: ","reason":"NodeInstaller_InstallerPodFailed","status":"True","type":"Degraded"},{"lastTransitionTime":"2020-10-08T16:16:15Z","message":"NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7; 0 nodes have achieved new revision 8","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-10-07T21:20:08Z","message":"StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7; 0 nodes have achieved new revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-10-07T21:17:39Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}

So the installer failed to run. Digging further to find out why.
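The clusteroperator status diff quoted above can be sliced with jq to pull out just the condition that matters (the JSON fragment below is copied from the log line, abridged to the Degraded entry):

```shell
# Extract the Degraded condition from the clusteroperator status diff
# logged by status_controller.go (fragment abridged to the Degraded entry).
diff='{"status":{"conditions":[{"lastTransitionTime":"2020-10-08T16:21:04Z","message":"NodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: ","reason":"NodeInstaller_InstallerPodFailed","status":"True","type":"Degraded"}]}}'

echo "$diff" | jq -r '.status.conditions[] | select(.type=="Degraded") | "\(.reason): \(.message)"'
```

This surfaces the `NodeInstaller_InstallerPodFailed` reason that the next comment traces back to a false positive.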

Comment 9 Stefan Schimanski 2020-10-09 14:41:53 UTC
So, the message "1 nodes are failing on revision 7" is a false positive. The logs reveal that there is a revision 8 pod pending and the revision 7 kube-apiserver pod is not yet ready. We used to mark that revision as failed, and that bubbled up through the conditions, showing the operator as degraded.

This confirms the issue is cosmetic and not a 4.6.0 blocker.
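The decision fixed by the linked library-go pull request can be sketched as follows (a hypothetical simplification, not the actual operator code): a node's failed installer revision should only count toward Degraded when no newer revision is already pending to supersede it.

```shell
# Hypothetical simplification of the fixed logic: given a node's current,
# last-failed, and target revisions, decide whether the failure is real.
node_degraded() {
  local current=$1 failed=$2 target=$3
  # A failed revision is stale when a newer target revision is already
  # pending; in that case the failure should not mark the node degraded.
  if [ "$failed" -gt "$current" ] && [ "$target" -le "$failed" ]; then
    echo "degraded"
  else
    echo "ok"
  fi
}

node_degraded 6 7 8   # revision 7 "failed" but revision 8 is pending -> ok
node_degraded 6 7 7   # revision 7 failed and is still the target -> degraded
```

Under this sketch, the scenario from comment 9 (node at revision 6, "failing" on 7, with 8 pending) no longer reports Degraded.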

Comment 11 Ke Wang 2020-10-21 09:14:36 UTC
Have a cluster with uptime over 6 hours.
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-10-17-034503   True        False         7h6m    Cluster version is 4.7.0-0.nightly-2020-10-17-034503

$ oc get co | grep -v '.True.*False.*False'
(no output: every cluster operator is Available and neither Progressing nor Degraded)

$ oc get pods -A | grep -vE 'Running|Completed'
(no output: no pods outside Running/Completed)

The cluster is healthy.

Checked whether a similar false-positive message like "1 nodes are failing on revision 7" can be found.
$ oc debug node/ip-xx-xx-137-45.us-east-2.compute.internal
sh-4.4# cd /var/log/pods
sh-4.4# grep -nr "2 nodes are at revision.*1 nodes are at revision.*0 nodes have achieved new revision" openshift-*
sh-4.4# grep -nr "1 nodes are failing on revision" openshift-*

Nothing in the results at all, so moving the bug to VERIFIED.

Comment 14 errata-xmlrpc 2021-02-24 15:24:26 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

