Description of problem:
The OCP conformance test "[sig-arch] Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent [Suite:openshift/conformance/parallel]" is failing on Red Hat OpenShift on IBM Cloud (ROKS).

Version-Release number of selected component (if applicable):
4.8.2

How reproducible:
Always

Steps to Reproduce:
1. Run the OCP conformance test on a ROKS version 4.8.2 cluster.

Actual results:
fail [github.com/onsi/ginkgo.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Aug 2 16:49:59.814: Daemonsets found that do not meet platform requirements for update strategy: expected daemonset openshift-kube-proxy/openshift-kube-proxy to have maxUnavailable 10% or 33% (see comment) instead of 1

Expected results:
Pass

Additional info:
ROKS version 4.8.2 is in development at the moment and not publicly available.
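For reference, the single failing test can normally be run on its own with the openshift-tests binary from openshift/origin. This is a sketch of how we reproduce it; the exact invocation and environment setup on ROKS may differ:

openshift-tests run-test "[sig-arch] Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent [Suite:openshift/conformance/parallel]"

The DaemonSet the test is flagging is shown below.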
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "2"
    kubernetes.io/description: |
      This daemonset is the kubernetes service proxy (kube-proxy).
    release.openshift.io/version: 4.8.2
  creationTimestamp: "2021-08-03T12:32:39Z"
  generation: 2
  name: openshift-kube-proxy
  namespace: openshift-kube-proxy
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Network
    name: cluster
    uid: ef68884d-6ff3-45c7-a23e-d1e791c30882
  resourceVersion: "91160"
  uid: 74949715-7a00-49e7-beda-8e52e3bb1a57
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kube-proxy
  template:
    metadata:
      annotations:
        target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
      creationTimestamp: null
      labels:
        app: kube-proxy
        component: network
        openshift.io/component: network
        type: infra
    spec:
      containers:
      ...
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 2
  updatedNumberScheduled: 3
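Based on the test message, the check would pass if the DaemonSet rolled out with a percentage-based maxUnavailable. A sketch of what the stanza would need to look like (33% is just an example value satisfying the test; because the DaemonSet is owned by the Network operator per the ownerReferences above, the change would have to come from the operator's rendered manifest rather than a manual edit, which would be reconciled away):

  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 33%
    type: RollingUpdate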
Hi, are there any updates?
This bug got lost in the stack. It pops up frequently, and I should be able to take care of it with a backport @rtheis.com
Thank you.
*** This bug has been marked as a duplicate of bug 2029590 ***