Description of problem:
Cluster is degraded because of kube-apiserver. This also seems to cause 'oc logs <pod>' to return "You must be logged in".

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-07-25-091217

How reproducible:
Only installed once, so not sure. Will try to install again.

Steps to Reproduce:
1. Install using Flexxy with ipi-on-aws
2. Check cluster status (see the commands sketched below)

Actual results:
kube-apiserver degraded:
NodeInstallerDegraded: 1 nodes are failing on revision 6:
NodeInstallerDegraded: pods "installer-6-ip-10-0-137-104.us-east-2.compute.internal" not found

Expected results:
Cluster in good state

Additional info:
I was in the middle of testing the selinux change to enable katacontainers. I did not notice the degraded state until I ran into issues with my testing. Since I can't get logs, I am not sure if it is related.
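For reference, the degraded condition and the installer-pod error above can be inspected with standard oc commands (a minimal sketch; the condition text comes from the status quoted above, everything else is ordinary CLI usage):

  # Overall operator health; kube-apiserver reports DEGRADED=True in the failing state
  oc get clusteroperator kube-apiserver

  # Full condition messages, including the NodeInstallerDegraded text quoted above
  oc get clusteroperator kube-apiserver -o yaml

  # Installer pods for the failing revision live in the operand namespace
  oc get pods -n openshift-kube-apiserver | grep installer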
Working on getting and attaching logs.
logs can be found here: http://file.bos.redhat.com/cmeadors/must-gather.local.7046820645787427138.tgz
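Based on the archive name, the logs were presumably collected with the default must-gather tooling (an assumption; the actual invocation is not recorded here):

  # Collects cluster state into a local must-gather.local.<id> directory
  oc adm must-gather
  # Archive the directory for upload
  tar czf must-gather.local.<id>.tgz must-gather.local.<id>/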
Looks like kube-apiserver sorted itself out. It is no longer degraded after letting it settle:

StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11

Was there a code change that could have been picked up with automatic updates?
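The recovered state can be confirmed from the kube-apiserver operator resource (a sketch assuming the standard operator.openshift.io kubeapiserver/cluster resource; field paths may vary slightly by release):

  # Per-node revision status; all nodes should report the same current revision (11 here)
  oc get kubeapiserver cluster -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{" -> "}{.currentRevision}{"\n"}{end}'

  # Full operator status, including the condition messages quoted above
  oc get kubeapiserver cluster -o yaml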
Possible perfect storm. AWS seemed to be causing issues with retrieving logs, and logs from the time period of the issue appear to be lost, so I suspect the must-gather is incomplete as well. The suspected AWS issue went away. No one else who installed that nightly reported kube-apiserver being degraded, and there is no real reproducer. I have provided everything I can. I am not going to keep this install, but I will look for the issue on other nightlies.
*** This bug has been marked as a duplicate of bug 1858763 ***