Bug 1997057
| Summary: | Azure upgrade test failing due to [sig-api-machinery] Kubernetes APIs remain available for new connections | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Sinny Kumari <skumari> |
| Component: | kube-apiserver | Assignee: | Stefan Schimanski <sttts> |
| Status: | CLOSED DUPLICATE | QA Contact: | Xingxing Xia <xxia> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.9 | CC: | aos-bugs, mfojtik, wking, xxia |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | tag-ci | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-08-25 10:37:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Sinny Kumari
2021-08-24 10:33:13 UTC

---

Possibly a dup of bug 1955333? Certainly in the same Azure + Kube-reachability space.

e2e-agnostic-* jobs could run on any platform. But for the MCO, they're currently Azure [1]. And Kube-reachability issues are often platform-specific, involving pod-restart logic vs. platform-specific load balancer implementations. So tweaking the title here to include "Azure".

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/2704/pull-ci-openshift-machine-config-operator-master-e2e-agnostic-upgrade/1429824441595990016#1:build-log.txt%3A19

---

Setting priority to high because the upgrade job is blocking MCO PRs, and as a result most of the PRs are not getting merged.

---

(In reply to W. Trevor King from comment #2)
> e2e-agnostic-* jobs could run on any platform. But for the MCO, they're
> currently Azure [1]. And Kube-reachability issues are often
> platform-specific, involving pod-restart logic vs. platform-specific load
> balancer implementation. So tweaking the title here to include "Azure".

Not sure this is an Azure-specific issue. I later started an upgrade test on GCP, https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/2722/pull-ci-openshift-machine-config-operator-master-e2e-gcp-upgrade/1430172245858193408, where these tests failed too.

---

Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1845414. Until there are very specific new insights about the root causes, there is no value in new BZs. There are a thousand different reasons why the API can go unavailable for some time, in many components like kube-apiserver itself, but also node, MCO, cri-o, and the cloud infra. I don't see a triage attempt in this BZ pointing to one of those.

*** This bug has been marked as a duplicate of bug 1845414 ***
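For context, the failing "[sig-api-machinery] Kubernetes APIs remain available for new connections" check conceptually works by repeatedly opening new connections to the API server during the upgrade and flagging any window in which those requests fail. The origin test suite's real implementation differs; the sketch below is a hypothetical, simplified model (the `Probe` type and `disruption_windows` helper are illustrative, not from any OpenShift code) showing how raw probe results can be folded into the outage windows such a test asserts on.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Probe:
    t: float   # seconds since the start of the upgrade run
    ok: bool   # True if a *new* connection to the API server succeeded

def disruption_windows(probes: List[Probe]) -> List[Tuple[float, float]]:
    """Collapse consecutive failed probes into (start, end) outage windows.

    Hypothetical helper, not from the origin test suite: it models how a
    "remains available for new connections" check can summarize probe data.
    """
    ordered = sorted(probes, key=lambda p: p.t)
    windows: List[Tuple[float, float]] = []
    start: Optional[float] = None
    for p in ordered:
        if not p.ok and start is None:
            start = p.t                   # outage begins at first failure
        elif p.ok and start is not None:
            windows.append((start, p.t))  # outage ends at first success
            start = None
    if start is not None:                 # run ended while still failing
        windows.append((start, ordered[-1].t))
    return windows

# Example: probes every second; new connections fail from t=3 until t=5.
probes = [Probe(t, t not in (3, 4)) for t in range(8)]
print(disruption_windows(probes))  # [(3, 5)]
```

A test like this then fails the run when any window exceeds a tolerated duration, which is why a single slow load-balancer failover (platform-specific, as discussed above) is enough to trip it.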