Bug 2213183 - [Stretch cluster] Add capacity should not show "thin-csi" storage class in storageClass dropdown for LSO stretch cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Bipul Adhikari
QA Contact: Mahesh Shetty
URL:
Whiteboard:
Depends On:
Blocks: 2244409
 
Reported: 2023-06-07 10:43 UTC by Joy John Pinto
Modified: 2023-11-08 18:52 UTC
CC List: 7 users

Fixed In Version: 4.14.0-136
Doc Type: Bug Fix
Doc Text:
Previously, the add capacity operation failed when moving from an LSO storage class to a default storage class because the persistent volumes (PVs) for expansion were not created correctly. With this fix, the add capacity operation using a non-LSO storage class is not allowed when the storage cluster was initially created using an LSO-based storage class.
Clone Of:
Environment:
Last Closed: 2023-11-08 18:51:19 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage odf-console pull 1049 0 None open fix StorageClass filtering for add capacity modal 2023-09-19 13:46:11 UTC
Github red-hat-storage odf-console pull 1056 0 None open Bug 2213183: [release-4.14] fix StorageClass filtering for add capacity modal 2023-09-20 16:59:31 UTC
Github red-hat-storage odf-console pull 1057 0 None open Bug 2213183: [release-4.14-compatibility] fix StorageClass filtering for add capacity modal 2023-09-20 16:59:26 UTC
Github red-hat-storage odf-console pull 942 0 None Merged Bug 2213183: Block users to switch StorageClass arbitrarily 2023-07-31 02:03:58 UTC
Red Hat Product Errata RHSA-2023:6832 0 None None None 2023-11-08 18:52:27 UTC

Description Joy John Pinto 2023-06-07 10:43:37 UTC
Created attachment 1969504 [details]
Thin csi under storageClass dropdown

Description of problem (please be as detailed as possible and provide log
snippets):
[Stretch cluster] Add capacity should not show "thin-csi" storage class in dropdown for LSO stretch cluster

Version of all relevant components (if applicable):
OCP 4.13.0-0.nightly-2023-06-05-212836
ODF 4.13.0-207

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
NA

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Install an OCP cluster
2. Add disks in vSphere and install the Local Storage Operator
3. Install ODF and create a storage system using local storage with stretch mode enabled
4. Through the OpenShift UI, try to add capacity
5. The "thin-csi" storage class is seen in the storageClass dropdown, which should not be the case as this is an LSO-deployed stretch cluster

Actual results:
The "thin-csi" storage class is seen in the storageClass dropdown.

Expected results:
During add capacity, the "thin-csi" storage class should not be seen in the storageClass dropdown, as this is an LSO-deployed stretch cluster.

Additional info:
This observation was noted while analyzing https://bugzilla.redhat.com/show_bug.cgi?id=2209012

Refer to attachment thin_csi_add_capacity.png

Comment 2 Bipul Adhikari 2023-06-07 12:57:58 UTC
We need the queries mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2209012#c27 answered before we can proceed with this.

Comment 3 Joy John Pinto 2023-06-16 07:30:48 UTC
The queries are answered in https://bugzilla.redhat.com/show_bug.cgi?id=2209012#c30. Similar behaviour is seen on a non-stretch LSO cluster when trying to add capacity through the thin-csi storage class.

Comment 4 Bipul Adhikari 2023-07-28 08:02:30 UTC
Based on email exchanges with the PM, we will block moving from a no-provisioner (no-prov) storage class to a provisioner-based (prov) storage class.
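
The blocking rule above reduces to a check on the StorageClass provisioner: LSO-backed classes use `kubernetes.io/no-provisioner`, so when the cluster's device sets were created from a no-prov class, only other no-prov classes are valid expansion targets. A minimal sketch of that rule follows; the `StorageClass` shape and function names here are hypothetical for illustration, not the actual odf-console code:

```typescript
// Sketch of the blocking rule; types and names are hypothetical,
// not the actual odf-console implementation.
type StorageClass = {
  metadata: { name: string };
  provisioner: string;
};

// LSO-backed storage classes use the no-provisioner provisioner.
const NO_PROVISIONER = "kubernetes.io/no-provisioner";

// If the cluster was created with a no-prov (LSO) storage class, only
// other no-prov classes are valid targets for add capacity; otherwise
// all classes remain candidates for further filtering.
const getValidExpansionClasses = (
  all: StorageClass[],
  clusterProvisioner: string
): StorageClass[] =>
  clusterProvisioner === NO_PROVISIONER
    ? all.filter((sc) => sc.provisioner === NO_PROVISIONER)
    : all;
```

With this rule in place, an LSO stretch cluster would see only classes like "localblock" in the Add Capacity dropdown, and "thin-csi" (a vSphere CSI class) would be filtered out.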

Comment 11 Joy John Pinto 2023-09-15 14:54:43 UTC
Verified on OCP 4.14 (4.14.0-0.nightly-2023-09-15-055234) and ODF 4.14.0-134 on an arbiter mode stretch cluster. The thin-csi storage class is still seen in the dropdown, and upon clicking "Add capacity" from the UI with the thin-csi storage class on an LSO stretch cluster, new OSDs are created using the thin-csi storage class.

[jopinto@jopinto new]$ oc get pods -o wide -n openshift-storage |grep osd
rook-ceph-osd-0-5845d9c4c8-wcwvm                                  2/2     Running     0             96m    10.131.2.24    compute-0         <none>           <none>
rook-ceph-osd-1-5b44fdb6c7-xdd4k                                  2/2     Running     0             96m    10.128.2.23    compute-3         <none>           <none>
rook-ceph-osd-10-958bf7b78-7nj7l                                  2/2     Running     0             96m    10.129.2.21    compute-5         <none>           <none>
rook-ceph-osd-11-7966955484-lnjjj                                 2/2     Running     0             96m    10.129.2.22    compute-5         <none>           <none>
rook-ceph-osd-12-7b97966d68-qq7d4                                 2/2     Running     0             33m    10.131.0.34    compute-2         <none>           <none>
rook-ceph-osd-13-cc5796d5-zmp4n                                   2/2     Running     0             31m    10.130.2.47    compute-1         <none>           <none>
rook-ceph-osd-14-6bd4b9d954-lflrq                                 2/2     Running     0             31m    10.130.2.46    compute-1         <none>           <none>
rook-ceph-osd-15-79bf74456f-tx9z4                                 2/2     Running     0             31m    10.131.2.29    compute-0         <none>           <none>
rook-ceph-osd-16-7b5bbc8d4c-g9xck                                 2/2     Running     0             31m    10.128.2.32    compute-3         <none>           <none>
rook-ceph-osd-17-68785bd64f-6s77g                                 2/2     Running     0             31m    10.128.4.34    compute-4         <none>           <none>
rook-ceph-osd-18-6d67d9fc9d-gmzk9                                 2/2     Running     0             31m    10.128.2.41    compute-3         <none>           <none>
rook-ceph-osd-19-fbc456bf5-96r6h                                  2/2     Running     0             31m    10.128.4.33    compute-4         <none>           <none>
rook-ceph-osd-2-697c6fb95d-srfnl                                  2/2     Running     0             96m    10.128.4.23    compute-4         <none>           <none>
rook-ceph-osd-20-5dbbdb77bb-ktzmr                                 2/2     Running     0             31m    10.129.2.32    compute-5         <none>           <none>
rook-ceph-osd-21-6c78c68bf8-ns7t5                                 2/2     Running     0             31m    10.130.2.48    compute-1         <none>           <none>
rook-ceph-osd-22-594d58f68d-rqd6g                                 2/2     Running     0             31m    10.131.0.36    compute-2         <none>           <none>
rook-ceph-osd-23-578947974-tddjl                                  2/2     Running     0             31m    10.131.2.32    compute-0         <none>           <none>
rook-ceph-osd-3-5d7d64996b-ld77s                                  2/2     Running     0             96m    10.130.2.23    compute-1         <none>           <none>
rook-ceph-osd-4-79bff4bdfb-drzhq                                  2/2     Running     0             96m    10.128.2.26    compute-3         <none>           <none>
rook-ceph-osd-5-696b89b7b8-xlpbn                                  2/2     Running     0             96m    10.130.2.24    compute-1         <none>           <none>
rook-ceph-osd-6-79589bc569-vkgfb                                  2/2     Running     0             96m    10.128.4.26    compute-4         <none>           <none>
rook-ceph-osd-7-9b8fd668f-p6gnd                                   2/2     Running     0             96m    10.131.0.20    compute-2         <none>           <none>
rook-ceph-osd-8-75954bd45d-zkbj7                                  2/2     Running     0             96m    10.131.2.25    compute-0         <none>           <none>
rook-ceph-osd-9-f6c978c57-v2789                                   2/2     Running     0             96m    10.131.0.21    compute-2         <none>           <none>

[jopinto@jopinto new]$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                            STORAGECLASS                  REASON   AGE
local-pv-190bf284                          100Gi      RWO            Delete           Available                                                                    localblock                             41m
local-pv-1f1b13f4                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-0-data-27bs6j         localblock                             102m
local-pv-2021a651                          100Gi      RWO            Delete           Available                                                                    localblock                             40m
local-pv-2d41bf11                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-3-data-0d2zm5         localblock                             103m
local-pv-3426d649                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-3-data-2tgw5t         localblock                             102m
local-pv-3d4d766d                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-2-data-14crkb         localblock                             102m
local-pv-5881e2bb                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-2-data-24tq57         localblock                             102m
local-pv-68c955be                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-2-data-07jf5w         localblock                             102m
local-pv-7909aae2                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-1-data-1cfp5l         localblock                             102m
local-pv-7cfb0f89                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-1-data-2b5k6j         localblock                             102m
local-pv-8de8debe                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-1-data-0mktsb         localblock                             103m
local-pv-aec55923                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-3-data-1f5lcl         localblock                             102m
local-pv-e1baee2c                          100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-0-data-1ml7nl         localblock                             102m
local-pv-e83be70                           100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-localblock-0-data-0xptdv         localblock                             102m
pvc-0b71b76a-1a06-4357-81e5-14e51b8de823   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-0-data-24dfqc           thin-csi                               31m
pvc-11b2bf09-8a47-493c-8814-ff3c25d37d14   100Gi      RWX            Delete           Bound       openshift-image-registry/registry-cephfs-rwx-pvc                 ocs-storagecluster-cephfs              93m
pvc-21a2158a-437d-4a57-909d-060ed6141da3   40Gi       RWO            Delete           Bound       openshift-monitoring/my-alertmanager-claim-alertmanager-main-0   ocs-storagecluster-ceph-rbd            94m
pvc-2f9e2e6e-6908-42c8-8570-773fa3543f84   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-1-data-18dq42           thin-csi                               31m
pvc-4bd6f193-a9bd-48aa-8f0a-d170eb1ed14b   50Gi       RWO            Delete           Bound       openshift-storage/db-noobaa-db-pg-0                              ocs-storagecluster-ceph-rbd            94m
pvc-4c62a012-1bd4-4380-86eb-f917aa53dadf   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-1-data-04m68r           thin-csi                               33m
pvc-5115540b-c01a-4071-b473-07b519b8c6f3   40Gi       RWO            Delete           Bound       openshift-monitoring/my-prometheus-claim-prometheus-k8s-0        ocs-storagecluster-ceph-rbd            94m
pvc-65f453b5-2aa6-4659-89a6-a9d3562b5fd8   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-0-data-0vscrt           thin-csi                               33m
pvc-79fde932-0862-4641-863a-387b632fd3f1   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-3-data-2z4b6c           thin-csi                               31m
pvc-8f76cfa3-60d2-4efc-b1eb-ff1c48c0496a   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-3-data-1chz7f           thin-csi                               31m
pvc-a00bd2e5-f7c4-4003-9ce2-403171da88c2   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-2-data-2j4dm9           thin-csi                               31m
pvc-aade7110-09b7-4417-a736-62c846ec7fca   40Gi       RWO            Delete           Bound       openshift-monitoring/my-prometheus-claim-prometheus-k8s-1        ocs-storagecluster-ceph-rbd            94m
pvc-ba16add8-255b-4c34-8a9b-30878c90e38c   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-1-data-2pmzcw           thin-csi                               31m
pvc-cd2b962e-8ef9-4042-8689-c43afd5b6dfe   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-3-data-0fcmmm           thin-csi                               33m
pvc-d0b4e19a-0a92-47a7-9b85-b9a0a580acc1   40Gi       RWO            Delete           Bound       openshift-monitoring/my-alertmanager-claim-alertmanager-main-1   ocs-storagecluster-ceph-rbd            94m
pvc-e9c1f219-4eb3-49c1-ba0f-132330896316   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-2-data-1cdbt2           thin-csi                               31m
pvc-f6f815c2-488a-4e24-b293-ed6785c61f2a   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-0-data-1dx5sv           thin-csi                               31m
pvc-f7b542e6-9b1d-4ea0-9c8a-b5116e004c70   100Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-thin-csi-2-data-07vqjd           thin-csi                               33m
[jopinto@jopinto new]$

Comment 12 Sanjal Katiyar 2023-09-19 13:44:29 UTC
PR for the fix in master: https://github.com/red-hat-storage/odf-console/pull/1049

Comment 13 Sanjal Katiyar 2023-09-19 14:12:09 UTC
Since we are in the blocker-only phase, I am increasing the severity of this BZ to "high" so that we can take it up in the 4.14 release; otherwise it could be a minor regression:

"Minor" because we previously showed only non-ceph/non-noobaa based storage classes in this "Add Capacity" modal, but currently it shows all the storage classes (prov/no-prov/ceph-prov/noobaa-prov). This does not have any severe impact, but anyone could expand via ceph/noobaa based storage classes by mistake, and it also breaks existing UI functionality.

https://bugzilla.redhat.com/show_bug.cgi?id=2213183#c12 >> this should fix it.
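
The pre-regression behavior described above (showing only non-ceph/non-noobaa storage classes) amounts to an exclusion filter on the provisioner string. A minimal sketch, assuming hypothetical provisioner substrings and helper names for illustration:

```typescript
// Sketch of excluding ODF-internal provisioners from the Add Capacity
// dropdown; the marker substrings are assumptions for illustration,
// not the actual odf-console filter.
type SC = { metadata: { name: string }; provisioner: string };

// Ceph CSI and NooBaa provisioners are ODF-internal and should not be
// offered as expansion targets.
const INTERNAL_PROVISIONER_MARKERS = ["ceph.com", "noobaa.io"];

const isExternalClass = (sc: SC): boolean =>
  !INTERNAL_PROVISIONER_MARKERS.some((m) => sc.provisioner.includes(m));

const dropdownClasses = (all: SC[]): SC[] => all.filter(isExternalClass);
```

Under this sketch, classes such as ocs-storagecluster-ceph-rbd would be excluded from the dropdown, while external classes like localblock remain.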

Comment 15 Joy John Pinto 2023-10-06 09:50:31 UTC
Verified with OCP 4.14.0-0.nightly-2023-10-04-143709 and ODF 4.14.0-141 on an arbiter mode stretch cluster. Only the localblock storage class is seen in the dropdown (refer to verification_bug_2213183.png), and add capacity is working as expected.

Comment 17 errata-xmlrpc 2023-11-08 18:51:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

