Description of problem:
-----------------------
bridge-marker and kube-cni-linux-bridge-plugin pods are not available in an ARM cluster

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
CNV 4.14 - v4.14.0.rhel9-1911
OCP 4.14.0-ec.4 on an ARM cluster

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Install the CNV (OpenShift Virtualization) operator on an ARM cluster
2. Create the 'hyperconverged' CR
3. Check for the 'bridge-marker' and 'kube-cni-linux-bridge-plugin' pods in the openshift-cnv namespace

Actual results:
---------------
The 'bridge-marker' and 'kube-cni-linux-bridge-plugin' pods are missing

Expected results:
-----------------
The 'bridge-marker' and 'kube-cni-linux-bridge-plugin' pods should be available
As the pods are unavailable because of this issue, marking this bug as a blocker.
Thanks for reporting this. IIUIC, we need to drop our arch selectors and replace them with a simple os=linux selector on our daemon sets. Please post the needed PRs to CNAO (and to our components, if we pull these manifests from them). Once that's done, we need to backport those PRs to 4.14.
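The selector change described above can be sketched like this. This is an illustration of the approach, not the actual CNAO PR; jq is assumed to be installed, and the `oc patch` line is shown only as a comment since it needs a live cluster:

```shell
# Start from an arch-pinned nodeSelector like the one that broke ARM clusters.
old='{"kubernetes.io/arch":"amd64","kubernetes.io/os":"linux"}'
# Drop the arch pin, leaving only the os=linux selector.
new=$(echo "$old" | jq -c 'del(.["kubernetes.io/arch"])')
echo "$new"   # → {"kubernetes.io/os":"linux"}
# Applying it to a daemonset on a live cluster would look roughly like
# (illustrative only):
#   oc patch ds bridge-marker -n openshift-cnv --type=merge \
#     -p "{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":$new}}}}"
```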
Linked the PR. Note that the placement that was changed is the default placement; in case HCO has another placement, CNAO's placement won't be used, so it can act as a workaround. We also of course need to make sure that the placement that is actually used doesn't have kubernetes.io/arch=amd64, in case it does.
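A quick way to check whether the placement in use still carries the amd64 pin. This is a hedged sketch: jq is assumed to be installed, and the inline JSON is a stand-in for real `oc get daemonset ... -ojson` output:

```shell
# Stand-in manifest; on a cluster this would come from:
#   oc get daemonset bridge-marker -n openshift-cnv -ojson
manifest='{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"amd64"}}}}}'
# Print the arch selector if present, "none" otherwise.
arch=$(echo "$manifest" | jq -r '.spec.template.spec.nodeSelector["kubernetes.io/arch"] // "none"')
echo "arch selector: $arch"   # → arch selector: amd64
```

If this prints anything other than "none", the placement is still arch-pinned and ARM nodes will be excluded.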
Backported: https://github.com/kubevirt/cluster-network-addons-operator/pull/1605
The fix is now available in the latest build. Satheesaran, would you please test it for us and mark it as VERIFIED if it helps? We don't have ARM clusters, so we cannot verify that the bug is fixed. There is no special action needed from our QE; for x86 this BZ will be verified by the next run of regression tests.
(In reply to Petr Horáček from comment #8)
> The fix is now available in the latest build.
>
> Satheesaran, would you please test it for us and mark it as VERIFIED if it
> helps?
>
> We don't have ARM clusters, so we cannot verify that the bug is fixed. There
> is no special action needed from our QE, for x86 this BZ will be verified by
> the next run of regression tests.

Sure, I can verify this.

Verified with:
OCP - 4.14.0-rc.1
CNV - v4.14.0.rhel9-1981

Here are the steps performed to verify this bug.

1. Installed an ARM cluster, deployed the CNV operator, and created the HyperConverged CR.

2. bridge-marker and kube-cni-linux-bridge-plugin pods are available post CNV deployment and HyperConverged CR creation:

[cloud-user@ocp-psi-executor ~]$ oc get pods -n openshift-cnv | grep -E "bridge-marker|kube-cni"
bridge-marker-g5c2m                  1/1   Running   0   7h33m
bridge-marker-h5sgj                  1/1   Running   0   7h33m
bridge-marker-jnkgn                  1/1   Running   0   7h33m
bridge-marker-pqrl2                  1/1   Running   0   7h33m
bridge-marker-qpthz                  1/1   Running   0   7h33m
bridge-marker-sfsl9                  1/1   Running   0   7h33m
kube-cni-linux-bridge-plugin-d4jcn   1/1   Running   0   7h33m
kube-cni-linux-bridge-plugin-f5hq8   1/1   Running   0   7h33m
kube-cni-linux-bridge-plugin-h5x89   1/1   Running   0   7h33m
kube-cni-linux-bridge-plugin-h7brv   1/1   Running   0   7h33m
kube-cni-linux-bridge-plugin-lw9p6   1/1   Running   0   7h33m
kube-cni-linux-bridge-plugin-px85x   1/1   Running   0   7h33m

3. Checked the nodeSelector for bridge-marker and kube-cni-linux-bridge-plugin:

[cloud-user@ ~]$ oc get daemonset bridge-marker -n openshift-cnv -ojson | jq '.spec.template.spec.nodeSelector'
{
  "kubernetes.io/os": "linux"
}
[cloud-user@ ~]$ oc get daemonset kube-cni-linux-bridge-plugin -n openshift-cnv -ojson | jq '.spec.template.spec.nodeSelector'
{
  "kubernetes.io/os": "linux"
}

4.
Verified the bridge-marker and kube-cni-linux-bridge-plugin pods are available as daemonsets:

[cloud-user@ ~]$ oc get daemonset -n openshift-cnv
NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
bridge-marker                  6         6         6       6            6           kubernetes.io/os=linux   7h29m
hostpath-provisioner-csi       3         3         3       3            3           kubernetes.io/os=linux   7h28m
kube-cni-linux-bridge-plugin   6         6         6       6            6           kubernetes.io/os=linux   7h29m
virt-handler                   3         3         3       3            3           kubernetes.io/os=linux   7h29m

bridge-marker and kube-cni-linux-bridge-plugin pods are available.

With this observation, marking this bug as VERIFIED.
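The daemonset check above can also be scripted, e.g. for a regression test. A minimal sketch, assuming jq is installed; the inline JSON is a stand-in for real `oc get daemonset ... -ojson` status output:

```shell
# Sample daemonset status standing in for real cluster output.
ds='{"status":{"desiredNumberScheduled":6,"numberReady":6}}'
# jq -e sets the exit code from the boolean result, so this works in scripts.
if echo "$ds" | jq -e '.status.desiredNumberScheduled == .status.numberReady' > /dev/null; then
  echo "daemonset healthy"
fi
```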
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6817