Bug 1726802
Summary: | Couldn't delete ns: namespace svcaccounts was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Jeff Cantrill <jcantril>
Component: | kube-controller-manager | Assignee: | Mike Dame <mdame>
Status: | CLOSED DUPLICATE | QA Contact: | Xingxing Xia <xxia>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 4.2.0 | CC: | aos-bugs, bparees, hongkliu, maszulik, mfojtik, tnozicka, wking
Target Milestone: | --- | |
Target Release: | 4.2.z | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | buildcop | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2019-08-16 13:40:57 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Jeff Cantrill
2019-07-03 19:19:59 UTC
https://prow.k8s.io/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-4.2/1525
https://prow.k8s.io/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-4.2/1532

Saw a similar log:

Jul 26 12:02:12.799: INFO: Couldn't delete ns: "e2e-svcaccounts-2097": namespace e2e-svcaccounts-2097 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-svcaccounts-2097 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})

https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-4.2/2512#0:build-log.txt%3A18770

I think this is slow pod deletion and we back off when we shouldn't (I have a PR upstream to fix that). Mike, can you check the logs to see if there is a pod that took too long to delete, and send it to the Node team to find out why?

Is this a dup of bug 1713135?

To me this looks more like a dupe of https://bugzilla.redhat.com/show_bug.cgi?id=1727090. I'm closing this in favour of the other one, since there's already an investigation happening.

*** This bug has been marked as a duplicate of bug 1727090 ***
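For context on where the "timed out waiting for the condition" message comes from: it is the generic error produced by the polling helpers in k8s.io/apimachinery when a condition never becomes true within the limit. The sketch below is not the e2e framework's actual code; it is a minimal, hypothetical example (function name `waitForNamespaceDeleted` and the 2-second interval are invented) of that polling pattern, assuming a client-go version where `Get` takes a context (v0.18+). A namespace that is "empty but not yet removed" still exists as an object (typically Terminating with finalizers pending), so every poll sees it and the wait eventually times out.

```go
package example

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceDeleted polls until the namespace object is gone or the
// timeout expires. If the namespace lingers in Terminating, PollImmediate
// returns an error whose message is "timed out waiting for the condition",
// which the caller wraps into the log line quoted in this bug.
func waitForNamespaceDeleted(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace object is gone; deletion finished
		}
		if err != nil {
			return false, err // unexpected API error aborts the wait
		}
		return false, nil // namespace still exists; keep polling
	})
}
```

In practice, debugging a hang like this means finding which resources (often slowly deleting pods, per the comment above) or finalizers are keeping the namespace in Terminating past the test's limit.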