+++ This bug was initially created as a clone of Bug #2107023 +++

Description of problem:

When the provider cluster alone is upgraded from ODF 4.10.4 to ODF 4.11.0, CephFS PVCs cannot be created on the consumer cluster because ROOK_CSI_ENABLE_CEPHFS in the consumer configmap 'rook-ceph-operator-config' is set to "false". The ODF version on the consumer is still 4.10.4.

From the consumer cluster:

$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
    ROOK_CSI_ENABLE_CEPHFS: "false"
        f:ROOK_CSI_ENABLE_CEPHFS: {}

Must-gather logs before upgrading the provider and consumer clusters from ODF 4.10.4 to 4.11.0-113:
Consumer: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-c3/jijoy-j13-c3_20220713T081317/logs/testcases_1657705862/
Provider: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-pr/jijoy-j13-pr_20220713T043423/logs/testcases_1657705913/

Must-gather logs collected after upgrading the provider cluster to ODF 4.11.0-113:
Consumer: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-c3/jijoy-j13-c3_20220713T081317/logs/testcases_1657715380/
Provider: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-pr/jijoy-j13-pr_20220713T043423/logs/testcases_1657715387/

==================================================================
Version-Release number of selected component (if applicable):
ODF 4.11.0-113 on the provider cluster
ODF 4.10.4 on the consumer cluster
OCP 4.10.20
ocs-osd-deployer.v2.0.3
======================================================================
How reproducible:
2/2

Steps to Reproduce:
1. Install the provider and consumer clusters with ODF version 4.10.4 (ocs-osd-deployer.v2.0.3).
2. Upgrade the provider cluster to ODF 4.11.0.
3. Try to create a CephFS PVC on the consumer cluster.
4. Check the value of ROOK_CSI_ENABLE_CEPHFS on the consumer cluster:
   $ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
=====================================================================
Actual results:
Step 3: Cannot create a CephFS PVC.
Step 4: The value of ROOK_CSI_ENABLE_CEPHFS is "false":

$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
    ROOK_CSI_ENABLE_CEPHFS: "false"
        f:ROOK_CSI_ENABLE_CEPHFS: {}
======================================================================
Expected results:
Step 3: The CephFS PVC should reach the Bound state.
Step 4: The value of ROOK_CSI_ENABLE_CEPHFS should be "true":

$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
    ROOK_CSI_ENABLE_CEPHFS: "true"
        f:ROOK_CSI_ENABLE_CEPHFS: {}

Additional info:
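For reproducing step 3, a minimal CephFS PVC manifest like the one below can be applied on the consumer cluster. This is only a sketch: the PVC name and namespace are arbitrary, and `ocs-storagecluster-cephfs` is the usual ODF CephFS storage class name, which may differ on a consumer cluster (check with `oc get storageclass` first).

```yaml
# Hypothetical test PVC; name, namespace, and storageClassName are
# assumptions -- verify the actual CephFS storage class on the consumer
# cluster with `oc get storageclass` before applying.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
  namespace: openshift-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
```

With the bug present, this PVC stays in Pending instead of reaching Bound.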
Not a 4.11 blocker
This issue was earlier happening when upgrading both the consumer and provider clusters from 4.10 to 4.11; it was tracked in https://bugzilla.redhat.com/show_bug.cgi?id=2096823. That fix was included in 4.11 but not backported to 4.10, so we need to backport it to 4.10. However, it cannot be backported directly, as it depends on a couple of lines from another PR, https://github.com/red-hat-storage/ocs-operator/pull/1663. So we need to backport https://github.com/red-hat-storage/ocs-operator/pull/1663 first, and then backport https://github.com/red-hat-storage/ocs-operator/pull/1710.
Once this is complete, the customer has to upgrade to the latest 4.10 z-stream first before upgrading to any 4.11 version.
Update: an automated cherry-pick was not possible due to further merge conflicts, so the required changes are being backported manually.
We just need the fix in 4.10.z; it is already fixed in 4.11 via BZ #2096823.
*** Bug 2110274 has been marked as a duplicate of this bug. ***
*** Bug 2107023 has been marked as a duplicate of this bug. ***
Yes, I think it's good enough.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.10.6 Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:6675