Bug 2110274 - ROOK_CSI_ENABLE_CEPHFS is "false" after upgrading the provider cluster alone to ODF 4.11.0
Summary: ROOK_CSI_ENABLE_CEPHFS is "false" after upgrading the provider cluster alone ...
Keywords:
Status: CLOSED DUPLICATE of bug 2107073
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Malay Kumar parida
QA Contact: Martin Bukatovic
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-25 05:42 UTC by Malay Kumar parida
Modified: 2023-08-09 17:00 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2107073
Environment:
Last Closed: 2022-07-26 06:11:21 UTC
Embargoed:




Links
GitHub: red-hat-storage/ocs-operator pull 1757 (status: open) - Bug 2110274: [release-4.10] mark enableRookCSICephFS as true always in consumer cluster (last updated 2022-07-25 11:49:59 UTC)

Description Malay Kumar parida 2022-07-25 05:42:31 UTC
+++ This bug was initially created as a clone of Bug #2107073 +++

+++ This bug was initially created as a clone of Bug #2107023 +++

Description of problem:
When the provider cluster alone is upgraded from ODF 4.10.4 to ODF 4.11.0, CephFS PVCs cannot be created on the consumer cluster because ROOK_CSI_ENABLE_CEPHFS in the consumer's 'rook-ceph-operator-config' ConfigMap is set to "false".
The ODF version on the consumer is still 4.10.4.

From consumer cluster:

$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
  ROOK_CSI_ENABLE_CEPHFS: "false"
        f:ROOK_CSI_ENABLE_CEPHFS: {}
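
The grep above also matches the managedFields entry, so the value appears twice. To read only the data key, a jsonpath query can be used instead (a convenience sketch using standard oc syntax, not a command from the original report); in the broken state described here it prints "false":

$ oc get cm rook-ceph-operator-config -n openshift-storage -o jsonpath='{.data.ROOK_CSI_ENABLE_CEPHFS}{"\n"}'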


Must gather logs before upgrading provider and consumer cluster from ODF 4.10.4 to 4.11.0-113:

Consumer http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-c3/jijoy-j13-c3_20220713T081317/logs/testcases_1657705862/

Provider http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-pr/jijoy-j13-pr_20220713T043423/logs/testcases_1657705913/


Must gather logs collected after upgrading the provider cluster to ODF 4.11.0-113:
Consumer http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-c3/jijoy-j13-c3_20220713T081317/logs/testcases_1657715380/

Provider http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j13-pr/jijoy-j13-pr_20220713T043423/logs/testcases_1657715387/

==================================================================
Version-Release number of selected component (if applicable):
ODF 4.11.0-113 on provider cluster
ODF 4.10.4 on consumer cluster

OCP 4.10.20
ocs-osd-deployer.v2.0.3

======================================================================
How reproducible:
2/2

Steps to Reproduce:
1. Install provider and consumer cluster with ODF version 4.10.4.
(ocs-osd-deployer.v2.0.3)
2. Upgrade the provider cluster to ODF 4.11.0
3. Try to create a CephFS PVC on the consumer cluster (a sample manifest is sketched after these steps)
4. Check the value of ROOK_CSI_ENABLE_CEPHFS in the consumer cluster
$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
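
For step 3, a minimal CephFS PVC of the kind this test creates might look like the sketch below. The StorageClass name "ocs-storagecluster-cephfs" and the PVC name/namespace are assumptions (the usual ODF CephFS default), not values taken from the report:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-test        # placeholder name
  namespace: default           # placeholder namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs   # assumed default ODF CephFS StorageClass
EOF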


=====================================================================

Actual results:
Step 3. Cannot create CephFS PVC
Step 4. Value of ROOK_CSI_ENABLE_CEPHFS is "false"
$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
  ROOK_CSI_ENABLE_CEPHFS: "false"
        f:ROOK_CSI_ENABLE_CEPHFS: {}

======================================================================

Expected results:
Step 3. CephFS PVC should reach Bound state
Step 4. Value of ROOK_CSI_ENABLE_CEPHFS should be "true"
$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
  ROOK_CSI_ENABLE_CEPHFS: "true"
        f:ROOK_CSI_ENABLE_CEPHFS: {}


Additional info:
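
A possible, unverified stopgap (not described in this report) would be to force the value back to "true" in the consumer ConfigMap; the ocs-operator reconciles this ConfigMap and may revert the change, so this is only an illustrative sketch rather than a supported workaround:

$ oc patch cm rook-ceph-operator-config -n openshift-storage --type merge -p '{"data":{"ROOK_CSI_ENABLE_CEPHFS":"true"}}'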

--- Additional comment from RHEL Program Management on 2022-07-14 09:29:59 UTC ---

This bug, having no release flag set previously, now has the release flag 'odf-4.11.0' set to '?', and so is proposed to be fixed in the ODF 4.11.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Mudit Agarwal on 2022-07-19 13:49:42 UTC ---

Not a 4.11 blocker

--- Additional comment from Malay Kumar parida on 2022-07-25 04:24:36 UTC ---

This issue was previously seen when upgrading both the consumer and provider clusters from 4.10 to 4.11; it was tracked in https://bugzilla.redhat.com/show_bug.cgi?id=2096823. That fix was included in 4.11 but not backported to 4.10, so we need to backport it to 4.10. However, it cannot be backported directly because it depends on a couple of lines from another PR, https://github.com/red-hat-storage/ocs-operator/pull/1663.

So we need to backport https://github.com/red-hat-storage/ocs-operator/pull/1663 first, and then backport https://github.com/red-hat-storage/ocs-operator/pull/1710.

--- Additional comment from Malay Kumar parida on 2022-07-25 04:28:52 UTC ---

After this is complete, the customer has to first upgrade to the latest 4.10 z-stream before upgrading to any 4.11 version.
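
As a rough post-fix check on the consumer cluster (standard oc commands assumed here, not taken from this report), the CephFS CSI plugin pods should be running and a test PVC such as the one sketched under "Steps to Reproduce" should reach Bound:

$ oc get pods -n openshift-storage | grep csi-cephfsplugin
$ oc get pvc cephfs-pvc-test -n default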

Comment 2 Mudit Agarwal 2022-07-26 06:11:21 UTC

*** This bug has been marked as a duplicate of bug 2107073 ***

