Bug 1749075
| Summary: | failed to send heartbeat for resource "8ccc9a07-ed5d-4584-b845-3773bc5da3ff": | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jesus M. Rodriguez <jesusr> |
| Component: | Test Infrastructure | Assignee: | Steve Kuznetsov <skuznets> |
| Status: | CLOSED ERRATA | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | sponnaga, wking, wsun |
| Target Milestone: | --- | | |
| Target Release: | 4.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-16 06:40:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Jesus M. Rodriguez 2019-09-04 19:47:26 UTC

Brackets on these symptoms:

```console
$ curl -s 'https://ci-search-ci-search-next.svc.ci.openshift.org/search?name=-e2e-&maxAge=24h&context=0&search=Container+lease+in+pod+.*+failed' | jq -r '. | to_entries[].value | to_entries[].value[].context[]' | sort
2019/09/04 18:11:43 Container lease in pod e2e-aws-scaleup-rhel7 failed, exit code 1, reason Error
2019/09/04 18:11:50 Container lease in pod e2e-aws-proxy failed, exit code 1, reason Error
...
2019/09/04 19:48:20 Container lease in pod e2e-aws-upgrade failed, exit code 1, reason Error
2019/09/04 20:04:35 Container lease in pod e2e-aws-upgrade failed, exit code 1, reason Error
2019/09/04 21:51:46 Container lease in pod e2e-cmd failed, exit code 1, reason Error
```

This is a test-cluster thing, so I think we just need to wait and see how clean we are in CI.

From [1], the most recent occurrence is 44 hours ago with a "no route to host" [2]. That's clean enough for VERIFIED to me, and we can always reopen if it flares up again in CI. Since this doesn't affect OpenShift customers, I'm going to mark it VERIFIED myself, but anyone who disagrees is free to reopen :).

[1]: https://ci-search-ci-search-next.svc.ci.openshift.org/?search=failed%20to%20send%20heartbeat%20for%20resource
[2]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-rollback-4.1-to-4.2/159

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922
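For context on the error in the summary, the sketch below shows the general shape of a lease-heartbeat loop: a leased resource is held only while periodic renewals keep succeeding, so a transient network error such as "no route to host" surfaces as a "failed to send heartbeat for resource ..." message. This is a minimal illustrative Go sketch, not the actual ci-operator or lease-server code; the names `heartbeatLoop` and `sendHeartbeat` and the ten-second interval are assumptions made for the example.

```go
package main

import (
	"context"
	"log"
	"time"
)

// sendHeartbeat stands in for the call that renews the lease on a resource.
// It is a placeholder here; the real CI tooling makes a network call, which
// is where errors like "no route to host" would come from.
func sendHeartbeat(ctx context.Context, resource string) error {
	return nil
}

// heartbeatLoop renews the lease on a resource at a fixed interval until the
// context is cancelled. A single failed renewal is logged and retried on the
// next tick rather than aborting immediately.
func heartbeatLoop(ctx context.Context, resource string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := sendHeartbeat(ctx, resource); err != nil {
				log.Printf("failed to send heartbeat for resource %q: %v", resource, err)
			}
		}
	}
}

func main() {
	// Hold the example lease for one minute, renewing every ten seconds.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	heartbeatLoop(ctx, "8ccc9a07-ed5d-4584-b845-3773bc5da3ff", 10*time.Second)
}
```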