Description of problem (please be as detailed as possible and provide log snippets):
When an MCG standalone cluster is upgraded from ODF 4.16 to ODF 4.17, the NooBaa pods cannot be scheduled and the upgrade does not complete.

Version of all relevant components (if applicable):
Pre-upgrade: 4.16.2
Post-upgrade: 4.17.0-106

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Is this issue reproducible from the UI?
N/A

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Deploy OCS without adding the `cluster.ocs.openshift.io/openshift-storage: ''` label to the nodes
2. Deploy standalone MCG (pods will be in Pending state due to https://bugzilla.redhat.com/show_bug.cgi?id=2314432)
3. Remove nodeAffinity from the NooBaa CR
4. Wait for all NooBaa pods to reach Running state
5. Upgrade the cluster to 4.17

Actual results:
The NooBaa operator is stuck in the Installing phase and its pods are in CrashLoopBackOff:

cephcsi-operator.v4.17.0-106.stable          CephCSI operator                   4.17.0-106.stable                                           Succeeded
mcg-operator.v4.17.0-106.stable              NooBaa Operator                    4.17.0-106.stable   mcg-operator.v4.16.2-rhodf              Installing
ocs-client-operator.v4.17.0-106.stable       OpenShift Data Foundation Client   4.17.0-106.stable   ocs-client-operator.v4.16.2-rhodf       Succeeded
ocs-operator.v4.17.0-106.stable              OpenShift Container Storage        4.17.0-106.stable   ocs-operator.v4.16.2-rhodf              Succeeded
odf-csi-addons-operator.v4.17.0-106.stable   CSI Addons                         4.17.0-106.stable   odf-csi-addons-operator.v4.16.2-rhodf   Succeeded
odf-operator.v4.17.0-106.stable              OpenShift Data Foundation          4.17.0-106.stable   odf-operator.v4.16.2-rhodf              Succeeded
odf-prometheus-operator.v4.17.0-106.stable   Prometheus Operator                4.17.0-106.stable   odf-prometheus-operator.v4.16.2-rhodf   Succeeded
recipe.v4.17.0-106.stable                    Recipe                             4.17.0-106.stable   recipe.v4.16.2-rhodf                    Succeeded
rook-ceph-operator.v4.17.0-106.stable        Rook-Ceph                          4.17.0-106.stable   rook-ceph-operator.v4.16.2-rhodf        Succeeded

➜ ~ oc get pods | grep noobaa
noobaa-core-0                                      2/2   Running            0             4m51s
noobaa-db-pg-0                                     1/1   Running            0             5m20s
noobaa-default-backing-store-noobaa-pod-be2e916f   0/1   CrashLoopBackOff   3 (51s ago)   13m
noobaa-endpoint-6cc7f54c5f-98zhp                   1/1   Running            0             5m21s
noobaa-operator-5d7745f8f6-k8vp4                   0/1   CrashLoopBackOff   4 (26s ago)   5m1s

Expected results:
The NooBaa operator should upgrade successfully and all pods should be in Running state.

Additional info:
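For context on step 3 of the reproduction: the nodeAffinity being removed from the NooBaa CR is the placement stanza keyed on the storage label from step 1. A rough sketch of what that stanza looks like in the CR (illustrative only; the exact selector generated in a live cluster may differ):

```yaml
# Illustrative sketch, not copied from the affected cluster:
# the nodeAffinity block under spec.affinity of the NooBaa CR,
# keyed on the cluster.ocs.openshift.io/openshift-storage label.
# Step 3 removes this block so pods can schedule on unlabeled nodes.
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
```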
Please update the RDT flag/text appropriately.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:8676
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.