Bug 2128587 - csi rbd and cephfs plugin pods are not recreated after updating flag in rook-ceph-operator-config
Summary: csi rbd and cephfs plugin pods are not recreated after updating flag in rook-ceph-operator-config
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Nobody
QA Contact: Amrita Mahapatra
URL:
Whiteboard:
Depends On:
Blocks: 2039269 2041432
 
Reported: 2022-09-21 07:57 UTC by Amrita Mahapatra
Modified: 2023-08-09 17:03 UTC
CC: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-08 14:06:28 UTC
Embargoed:




Links
System ID: Github red-hat-storage rook pull 418
Private: 0
Priority: None
Status: Merged
Summary: Resync from upstream 1.10 to downstream 4.12
Last Updated: 2022-09-28 13:31:49 UTC

Description Amrita Mahapatra 2022-09-21 07:57:49 UTC
Description of problem (please be detailed as possible and provide log
snippets): csi rbd and cephfs plugin pods are not recreated after updating the 'CSI_ENABLE_METADATA' flag in the 'rook-ceph-operator-config' config map. In addition, 'setmetadata=true' remains set on the csi-cephfsplugin and csi-rbdplugin containers of the csi-cephfsplugin-provisioner and csi-rbdplugin-provisioner deployments, respectively, after the 'CSI_ENABLE_METADATA' flag is disabled.



Version of all relevant components (if applicable):
OCP version--- 4.12.0-0.nightly-2022-09-18-141547
ODF version--- 4.12.0-56
ceph version--- 16.2.10-41.el8cp
rook version--- v4.12.0-0.ffcae8e019e3e67f76c70c9badde72646034ec79


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)? No


Is there any workaround available to the best of your knowledge? No


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)? 4


Is this issue reproducible? Yes


Can this issue be reproduced from the UI? Yes


If this is a regression, please provide more details to justify this: NA


Steps to Reproduce:
1. Install an ODF 4.12 cluster.
2. Set the 'CSI_ENABLE_METADATA' flag to "true" in the 'rook-ceph-operator-config' config map from the UI or with the patch command below.

oc patch cm rook-ceph-operator-config -n openshift-storage -p $'data:\n "CSI_ENABLE_METADATA":  "true"'

Note: To update from the UI, log in to the web console, navigate to Workloads ---> ConfigMaps, and edit the YAML of the rook-ceph-operator-config config map.
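
One way to confirm the patch was applied (a hedged sketch, not part of the original report; assumes the default openshift-storage namespace):

# should print "true" once the config map has been patched
oc get cm rook-ceph-operator-config -n openshift-storage -o jsonpath='{.data.CSI_ENABLE_METADATA}'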

3. Check that 'setmetadata=true' and the cluster name are set on the csi-cephfsplugin and csi-rbdplugin containers of the csi-cephfsplugin-provisioner and csi-rbdplugin-provisioner deployments, respectively.
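
A possible way to check this (a sketch, not from the original report; assumes the standard provisioner deployment names used above):

# look for a setmetadata argument on the provisioner deployments
oc -n openshift-storage get deployment csi-rbdplugin-provisioner -o yaml | grep setmetadata
oc -n openshift-storage get deployment csi-cephfsplugin-provisioner -o yaml | grep setmetadata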

4. Check whether the rbd and cephfs plugin pods get recreated.
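
One hedged way to check this (not from the original report; assumes the plugin pods carry the usual app=csi-rbdplugin and app=csi-cephfsplugin labels):

# recreated pods would show a fresh AGE after the config map change
oc -n openshift-storage get pods -l app=csi-rbdplugin
oc -n openshift-storage get pods -l app=csi-cephfsplugin
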
5. Disable the 'CSI_ENABLE_METADATA' flag in the 'rook-ceph-operator-config' config map from the UI or with the patch command below.

oc patch cm rook-ceph-operator-config -n openshift-storage -p $'data:\n "CSI_ENABLE_METADATA":  "false"'

6. Check that 'setmetadata' is no longer set on the csi-cephfsplugin and csi-rbdplugin containers of the csi-cephfsplugin-provisioner and csi-rbdplugin-provisioner deployments, respectively.

7. Check whether the rbd and cephfs plugin pods get recreated.
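
For steps 6 and 7, the same style of hedged checks can be repeated after the flag is disabled (assumed commands, not from the original report):

# the grep would be expected to return nothing once the flag is disabled
oc -n openshift-storage get deployment csi-rbdplugin-provisioner csi-cephfsplugin-provisioner -o yaml | grep setmetadata
# recreated plugin pods would show a fresh AGE
oc -n openshift-storage get pods -l 'app in (csi-rbdplugin,csi-cephfsplugin)'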

Actual results: rbd and cephfs plugin pods are not recreated after updating the 'CSI_ENABLE_METADATA' flag in the 'rook-ceph-operator-config' config map, and 'setmetadata=true' remains set on the csi-cephfsplugin and csi-rbdplugin containers of the csi-cephfsplugin-provisioner and csi-rbdplugin-provisioner deployments, respectively, when the 'CSI_ENABLE_METADATA' flag is disabled.


Expected results: rbd and cephfs plugin pods should be recreated after updating the 'CSI_ENABLE_METADATA' flag in the 'rook-ceph-operator-config' config map, and 'setmetadata' should not be set on the csi-cephfsplugin and csi-rbdplugin containers of the csi-cephfsplugin-provisioner and csi-rbdplugin-provisioner deployments, respectively, when the flag is disabled.


Additional info:

