Bug 2096823 - After upgrading the cluster from ODF 4.10 to ODF 4.11, ROOK_CSI_ENABLE_CEPHFS moves to False
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: Madhu Rajanna
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On: 2096818
Blocks:
 
Reported: 2022-06-14 11:35 UTC by Madhu Rajanna
Modified: 2023-08-09 17:00 UTC (History)
9 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 2096818
Environment:
Last Closed: 2022-08-24 13:54:31 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage ocs-operator pull 1723 0 None open BUG 2096823: Enable cephfs in consumer mode 2022-06-20 07:42:44 UTC
Red Hat Product Errata RHSA-2022:6156 0 None None None 2022-08-24 13:55:03 UTC

Internal Links: 2096818

Description Madhu Rajanna 2022-06-14 11:35:28 UTC
+++ This bug was initially created as a clone of Bug #2096818 +++

Description of problem:
After upgrading the cluster from ODF 4.10 to ODF 4.11, ROOK_CSI_ENABLE_CEPHFS moves to False.

Version-Release number of selected component (if applicable):
Provider:
OCP Version:4.10.16
ODF Version:4.11.0-91

Consumer: 
OCP Version:4.10.16
ODF Version:4.11.0-91


How reproducible:


Steps to Reproduce:
1. Install OCP 4.10 on the Provider cluster

2. Install ODF 4.10 on the Provider cluster

3. Upgrade ODF 4.10 to ODF 4.11 on the Provider cluster

4. Install OCP 4.10 on the Consumer cluster

5. Install ODF 4.10 on the Consumer cluster

6. Upgrade ODF 4.10 to ODF 4.11 on the Consumer cluster

7. Create a new PVC [storage class = cephrbd] -> the PVC moves to Bound state

8. Create a new PVC [storage class = ceph_fs] -> the PVC is stuck in Pending state
$ oc describe pvc -n test-project
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  ExternalProvisioning  13s (x14 over 3m21s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "openshift-storage.cephfs.csi.ceph.com" or manually created by system administrator
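
The PVC manifest from step 8 was not attached to the report. A minimal manifest that would produce this kind of request might look like the following (the name `test-pvc` is illustrative; `ocs-storagecluster-cephfs` is the CephFS storage class shown in the verification in comment 7, assumed here to be the one used):

```yaml
# Illustrative CephFS PVC sketch (not from the report); with the bug
# present, this PVC stays Pending because the CephFS provisioner is
# disabled by ROOK_CSI_ENABLE_CEPHFS="false".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: test-project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
```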

9. Check the rook-ceph-operator-config ConfigMap on the consumer cluster:
$ oc get cm rook-ceph-operator-config -oyaml -nopenshift-storage | grep ROOK_CSI_ENABLE_CEPHFS
  ROOK_CSI_ENABLE_CEPHFS: "false"
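
The report does not record a workaround; the actual fix is the operator change in PR 1723. Conceptually, though, the flag could be flipped back by patching the ConfigMap directly. A sketch, assuming cluster-admin access and noting that ocs-operator may reconcile the value back:

```shell
# Hypothetical workaround sketch (not confirmed by this report):
# set the CephFS CSI flag back to "true" in the operator ConfigMap.
oc patch cm rook-ceph-operator-config -n openshift-storage \
  --type merge -p '{"data":{"ROOK_CSI_ENABLE_CEPHFS":"true"}}'

# Verify the current value of the flag.
oc get cm rook-ceph-operator-config -n openshift-storage \
  -o jsonpath='{.data.ROOK_CSI_ENABLE_CEPHFS}'
```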

Actual results:
ROOK_CSI_ENABLE_CEPHFS="false"

Expected results:
ROOK_CSI_ENABLE_CEPHFS="true"

Additional info:

Comment 7 Oded 2022-06-29 11:55:30 UTC
Bug fixed

SetUp Provider:
OCP Version: 4.10.18
ODF Version: 4.11.0-104

SetUp Consumer:
OCP Version: 4.10.18
ODF Version: 4.11.0-105


Test Process:
1. Upgrade the provider from ODF 4.10 to ODF 4.11

2. Upgrade the consumer from ODF 4.10 to ODF 4.11

3. Create CephFS and Ceph RBD PVCs on the consumer cluster:
$ oc get pvc -n oded-test
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
oded-fs    Bound    pvc-464b34f4-96e5-4e18-9965-3ee73c135e5c   1Gi        RWO            ocs-storagecluster-cephfs     37s
oded-rbd   Bound    pvc-41366b9d-90b0-46c1-8d28-bd846010b4fd   1Gi        RWO            ocs-storagecluster-ceph-rbd   24s

Comment 10 errata-xmlrpc 2022-08-24 13:54:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

