Bug 1811855 - Container setup test failing
Summary: Container setup test failing
Keywords:
Status: CLOSED DUPLICATE of bug 1811530
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Eric Duen
QA Contact: David Sanz
URL:
Whiteboard: buildcop
Depends On:
Blocks:
 
Reported: 2020-03-10 00:00 UTC by IgorKarpukhin
Modified: 2020-03-11 13:27 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-11 13:27:34 UTC
Target Upstream Version:
Embargoed:



Description IgorKarpukhin 2020-03-10 00:00:22 UTC
The container setup test has been failing since 2020-03-06.

https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-4.4/1218

Comment 1 Ben Parees 2020-03-10 17:54:06 UTC
This job has been consistently broken since March 6th, and it was fairly unstable before that, yet it is claimed to be one of our "release informing" jobs. It needs to be fixed, or we need to drop it from our release-informing jobs if there is a good reason it is not going to be fixed.

as seen here https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-4.4/1218/build-log.txt
(from this run: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-4.4/1218)

the installer reported a number of operators that failed to reach the available condition:

level=error msg="Cluster operator authentication Degraded is True with IngressStateEndpoints_MissingSubsets::RouterCerts_NoRouterCertSecret: RouterCertsDegraded: secret/v4-0-config-system-router-certs -n openshift-authentication: could not be retrieved: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"
level=info msg="Cluster operator authentication Progressing is Unknown with NoData: "
level=info msg="Cluster operator authentication Available is Unknown with NoData: "
level=error msg="Cluster operator kube-apiserver Degraded is True with StaticPods_Error: StaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-1 pods/kube-apiserver-jm35l22p-1a357-db252-master-1 container=\"kube-apiserver\" is not ready\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-1 pods/kube-apiserver-jm35l22p-1a357-db252-master-1 container=\"kube-apiserver\" is waiting: \"CrashLoopBackOff\" - \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-jm35l22p-1a357-db252-master-1_openshift-kube-apiserver(5b16dd30b1b4683313b212a3a543b11a)\"\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-1 pods/kube-apiserver-jm35l22p-1a357-db252-master-1 container=\"kube-apiserver-cert-regeneration-controller\" is not ready\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-1 pods/kube-apiserver-jm35l22p-1a357-db252-master-1 container=\"kube-apiserver-cert-regeneration-controller\" is waiting: \"CrashLoopBackOff\" - \"back-off 5m0s restarting failed container=kube-apiserver-cert-regeneration-controller pod=kube-apiserver-jm35l22p-1a357-db252-master-1_openshift-kube-apiserver(5b16dd30b1b4683313b212a3a543b11a)\"\nStaticPodsDegraded: pods \"kube-apiserver-jm35l22p-1a357-db252-master-2\" not found\nStaticPodsDegraded: pods \"kube-apiserver-jm35l22p-1a357-db252-master-0\" not found"
level=info msg="Cluster operator kube-apiserver Progressing is True with NodeInstaller: NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2"
level=info msg="Cluster operator kube-apiserver Available is False with StaticPods_ZeroNodesActive: StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2"
level=error msg="Cluster operator kube-controller-manager Degraded is True with StaticPods_Error: StaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-1 pods/kube-controller-manager-jm35l22p-1a357-db252-master-1 container=\"kube-controller-manager-recovery-controller\" is not ready\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-1 pods/kube-controller-manager-jm35l22p-1a357-db252-master-1 container=\"kube-controller-manager-recovery-controller\" is waiting: \"CrashLoopBackOff\" - \"back-off 5m0s restarting failed container=kube-controller-manager-recovery-controller pod=kube-controller-manager-jm35l22p-1a357-db252-master-1_openshift-kube-controller-manager(683033a25e54c090b4af63006f7368e1)\"\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-2 pods/kube-controller-manager-jm35l22p-1a357-db252-master-2 container=\"kube-controller-manager-recovery-controller\" is not ready\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-2 pods/kube-controller-manager-jm35l22p-1a357-db252-master-2 container=\"kube-controller-manager-recovery-controller\" is waiting: \"CrashLoopBackOff\" - \"back-off 5m0s restarting failed container=kube-controller-manager-recovery-controller pod=kube-controller-manager-jm35l22p-1a357-db252-master-2_openshift-kube-controller-manager(683033a25e54c090b4af63006f7368e1)\"\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-0 pods/kube-controller-manager-jm35l22p-1a357-db252-master-0 container=\"kube-controller-manager-recovery-controller\" is not ready\nStaticPodsDegraded: nodes/jm35l22p-1a357-db252-master-0 pods/kube-controller-manager-jm35l22p-1a357-db252-master-0 container=\"kube-controller-manager-recovery-controller\" is waiting: \"CrashLoopBackOff\" - \"back-off 5m0s restarting failed container=kube-controller-manager-recovery-controller pod=kube-controller-manager-jm35l22p-1a357-db252-master-0_openshift-kube-controller-manager(683033a25e54c090b4af63006f7368e1)\""
level=info msg="Cluster operator kube-storage-version-migrator Available is False with _NoMigratorPod: Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"
level=info msg="Cluster operator openshift-apiserver Available is False with APIServices_PreconditionNotReady: APIServicesAvailable: PreconditionNotReady"
level=info msg="Cluster operator openshift-controller-manager Progressing is True with _DesiredStateNotYetAchieved: Progressing: daemonset/controller-manager: observed generation is 0, desired generation is 8.\nProgressing: daemonset/controller-manager: number available is 0, desired number available > 1"
level=info msg="Cluster operator openshift-controller-manager Available is False with _NoPodsAvailable: Available: no daemon pods available on any node."
level=info msg="Cluster operator operator-lifecycle-manager-packageserver Available is False with : "
level=info msg="Cluster operator operator-lifecycle-manager-packageserver Progressing is True with : Working toward 0.14.1"

So I'm torn between leaving this on the installer component and asking individual teams to investigate why their operator did not become ready. For now I'll defer to the install team to spawn additional bugs if they see fit.
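
For anyone picking up the per-operator investigation, a minimal sketch of the kind of commands that could be run against a cluster from a failed run (assuming a working kubeconfig; the operator, namespace, and pod names below are taken from the log excerpt above):

    # List all cluster operators and their Available/Progressing/Degraded conditions
    oc get clusteroperators

    # Show the full status conditions for one operator, e.g. kube-apiserver
    oc get clusteroperator kube-apiserver -o yaml

    # Inspect the crash-looping static pods named in the Degraded message
    oc get pods -n openshift-kube-apiserver
    oc logs -n openshift-kube-apiserver kube-apiserver-jm35l22p-1a357-db252-master-1 -c kube-apiserver --previous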

Comment 2 Martin André 2020-03-11 13:27:34 UTC
Hi Ben, some recent changes in the etcd operator caused the bare metal (BM), OpenStack, oVirt, and vSphere platforms to break over the weekend. This should now be fixed by https://github.com/openshift/cluster-kube-apiserver-operator/pull/791. The jobs are still red because the fix has not yet propagated to the 4.4 release image.
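
As a quick way to confirm once the fix has landed, a sketch of how one might check which cluster-kube-apiserver-operator commit is included in a given 4.4 release payload (the payload pullspec below is a placeholder, not taken from this bug):

    # Show the source commit for each component in a release payload
    oc adm release info --commits quay.io/openshift-release-dev/ocp-release:4.4.0-rc.0-x86_64 | grep cluster-kube-apiserver-operator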

Regarding the stability of the job itself, it is also severely impacted by https://bugzilla.redhat.com/show_bug.cgi?id=1794714, which frequently causes the Network Granular checks to fail.

Closing this as a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1811530.

*** This bug has been marked as a duplicate of bug 1811530 ***

