Bug 1861899
| Summary: | kube-apiserver degraded: 1 nodes are failing on revision 6 | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Cameron Meadors <cmeadors> |
| Component: | kube-apiserver | Assignee: | Luis Sanchez <sanchezl> |
| Status: | CLOSED DUPLICATE | QA Contact: | Ke Wang <kewang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.6 | CC: | aos-bugs, mfojtik, sttts, xxia |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-08-26 16:18:15 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Cameron Meadors
2020-07-29 20:27:09 UTC
Working on getting and attaching logs. Logs can be found here: http://file.bos.redhat.com/cmeadors/must-gather.local.7046820645787427138.tgz

Looks like kube-apiserver sorted itself out. It is not degraded after letting it settle:

    StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11

Was there a code change that could have been picked up with automatic updates? Possibly a perfect storm: AWS seemed to be causing issues with getting logs, and logs from the time period of the issue seem to be lost. I suspect the must-gather logs will be incomplete as well. The suspected AWS issue went away.

No one else who installed that nightly reported any issues with kube-apiserver being degraded, so there is no real reproducer. I have provided everything I can. I am not going to keep this install, but I will look for the issue on other nightlies.

*** This bug has been marked as a duplicate of bug 1858763 ***
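For anyone triaging a similar report: the status message quoted above comes from the kube-apiserver cluster operator, whose per-node static pod revisions can be inspected with standard oc commands. A minimal sketch, not taken from the original report:

```sh
# Check whether the kube-apiserver cluster operator reports Degraded.
oc get clusteroperator kube-apiserver

# Inspect per-node static pod revisions; a node whose currentRevision
# lags the others produces messages like
# "1 nodes are failing on revision 6" until it catches up.
oc get kubeapiserver cluster \
  -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{"\t"}{.currentRevision}{"\n"}{end}'

# Collect diagnostic logs for attachment to a bug, as was done above.
oc adm must-gather
```

Once all entries report the same revision (here, revision 11), the operator stops reporting Degraded, which matches the recovery described in the comment.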