Description of problem:
The test "[sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes [Suite:openshift/conformance/parallel]" fails on 4.8 CI jobs for s390x. It has been observed to fail in the following CI runs:
1. 04/20 - https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-remote-libvirt-s390x-4.8/1384295824116158464
2. 04/15 - https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-remote-libvirt-s390x-4.8/1382484001675022336
3. 04/12 - https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-remote-libvirt-s390x-4.8/1381577904302854144

Version-Release number of selected component (if applicable):
4.8 s390x

How reproducible:
Please refer to the CI jobs linked above.

Steps to Reproduce:
Please refer to the CI jobs linked above.

Actual results:
The test fails on 4.8 CI jobs for s390x.

Expected results:
The test should pass in CI, e.g.:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-remote-libvirt-s390x-4.8/1382846217062453248

Additional info:
The error is "Pod openshift-marketplace/certified-operators-ptnxm is not healthy: container registry-server exited with non-zero exit code".

How often does this error occur? Is it a flake, or are all s390x builds broken? Setting to low priority for now.
https://search.ci.openshift.org/?search=container+registry-server+exited+with+non-zero+exit+code&maxAge=168h&context=0&type=bug%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

Per the search results above, this is a flake for s390x.
Looks like this is a duplicate of bug 1949991. Please re-open this bug if you think there's something different to consider for this BZ.

*** This bug has been marked as a duplicate of bug 1949991 ***
Following bug 1949991 for resolution.