Bug 1861899 - kube-apiserver degraded: 1 nodes are failing on revision 6
Summary: kube-apiserver degraded: 1 nodes are failing on revision 6
Keywords:
Status: CLOSED DUPLICATE of bug 1858763
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Luis Sanchez
QA Contact: Ke Wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-29 20:27 UTC by Cameron Meadors
Modified: 2020-08-26 16:18 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-26 16:18:15 UTC
Target Upstream Version:
Embargoed:



Description Cameron Meadors 2020-07-29 20:27:09 UTC
Description of problem:

Cluster is degraded because of kube-apiserver. This also seems to cause 'oc logs <pod>' to return "You must be logged in".
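(Triage note: a quick way to separate a credential problem from an apiserver-side problem, assuming an admin kubeconfig; the pod and namespace names below are placeholders:)

  # If this succeeds, the client session is still authenticated...
  $ oc whoami
  # ...so a "You must be logged in" error from 'oc logs' points at the apiserver/log
  # path rather than the client credentials
  $ oc logs <pod> -n <namespace>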


Version-Release number of selected component (if applicable):

4.6.0-0.nightly-2020-07-25-091217

How reproducible:

Only installed once so not sure.  Will try to install again.

Steps to Reproduce:
1. Install using Flexxy with ipi-on-aws
2. Check cluster status (see the sketch below)
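(A minimal sketch of the status check in step 2, assuming an admin kubeconfig:)

  # Overall operator health: AVAILABLE / PROGRESSING / DEGRADED columns
  $ oc get clusteroperators
  # Detailed conditions for the kube-apiserver operator, where the
  # NodeInstallerDegraded message quoted below shows up
  $ oc get clusteroperator kube-apiserver -o yaml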

Actual results:
kube-apiserver degraded

NodeInstallerDegraded: 1 nodes are failing on revision 6:
NodeInstallerDegraded: pods "installer-6-ip-10-0-137-104.us-east-2.compute.internal" not found
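(The NodeInstallerDegraded message refers to the per-revision installer pods the operator creates; a sketch for checking whether that pod exists and what is happening in the namespace, assuming the standard openshift-kube-apiserver namespace:)

  # Installer pods are named installer-<revision>-<node>, as in the message above
  $ oc get pods -n openshift-kube-apiserver | grep installer
  # Recent events often show why a pod went missing or failed
  $ oc get events -n openshift-kube-apiserver --sort-by=.metadata.creationTimestamp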

Expected results:

Cluster in good state


Additional info:

I was in the middle of testing the SELinux change to enable Kata Containers. I did not notice the degraded state until I ran into issues with my testing. Since I can't get logs, I am not sure if it is related.

Comment 1 Cameron Meadors 2020-07-29 20:27:43 UTC
Working on getting and attaching logs.

Comment 2 Cameron Meadors 2020-07-29 20:46:53 UTC
logs can be found here:

http://file.bos.redhat.com/cmeadors/must-gather.local.7046820645787427138.tgz
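(For reference, a bundle like this is normally collected and packaged along these lines; the destination path is illustrative:)

  # Gather cluster diagnostics into a local directory
  $ oc adm must-gather --dest-dir=./must-gather.local
  # Package for upload
  $ tar czf must-gather.local.tgz ./must-gather.local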

Comment 3 Cameron Meadors 2020-07-30 13:52:27 UTC
Looks like kube-apiserver sorted itself out. It is not degraded after letting it sit for a while.

StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11

Was there a code change that could have gotten picked up with automatic updates?
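(For completeness: the per-node revision behind a line like "3 nodes are at revision 11" can be read off the operator resource; this is just a sketch, with field names taken from the operator.openshift.io KubeAPIServer status:)

  # One line per control-plane node: current vs. target revision
  $ oc get kubeapiserver cluster -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{" current="}{.currentRevision}{" target="}{.targetRevision}{"\n"}{end}'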

Comment 4 Cameron Meadors 2020-07-30 14:50:08 UTC
Possible perfect storm. AWS seemed to be causing issues with getting logs, and logs from the time period of the issue seem to be lost; I suspect the must-gather logs will be incomplete as well. The suspected AWS issue went away. No one else who installed that nightly reported any issues with kube-apiserver being degraded.

No real reproducer.  I have provided everything I can.  I am not going to save this install, but I will look for the issue on other nightlies.

Comment 6 Stefan Schimanski 2020-08-26 16:18:15 UTC

*** This bug has been marked as a duplicate of bug 1858763 ***

