Cloned from bz1872119 to keep track of the SC update and the related discussions in bz1872119.
(In reply to Jilju Joy from comment #2)
> Cloned from bz1872119 to keep track of SC update and the related discussions
> in the bz1872119.

Correction: the source bug is bz1846085, i.e. "Cloned from bz1846085 to keep track of the SC update and the related discussions in bz1846085."
Marking as a blocker to make sure this is done for 4.6
This is applicable to clusters which are upgraded to OCS 4.5 and then to OCS 4.6.
A PR against master has been submitted to remove the need for a StorageClusterInitialization CR: https://github.com/openshift/ocs-operator/pull/789

This means we will now have the ability to control whether or not to reconcile the auxiliary resources (e.g. StorageClasses), but it will require the admin to manually toggle the desired booleans after upgrade. This is needed to preserve upgrade compatibility with current behavior.
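As a sketch of what that manual toggle looks like: per the steps exercised later in this bug, the switch is a reconcileStrategy field on the StorageCluster spec rather than a literal boolean. A minimal StorageCluster spec fragment:

```yaml
# StorageCluster spec fragment (sketch): opt in to reconciling the default
# StorageClasses after upgrade. Field path as exercised later in this bug.
spec:
  managedResources:
    storageClasses:
      reconcileStrategy: manage
```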
*** Bug 1881526 has been marked as a duplicate of this bug. ***
This fix will enable expansion for new PVCs in the upgraded cluster. The PVCs which were created before the upgrade (except in a fresh installation of OCS 4.5) will not have expansion capability.

Hi Humble,
Please confirm this.
(In reply to Jilju Joy from comment #11)
> This fix will enable expansion for new PVCs in the upgraded cluster. The
> PVCs which were created before upgrade (except fresh installation of OCS
> 4.5) will not have expansion capability.
> Hi Humble,
> Please confirm this.

AFAICT, the fix mentioned in c#7 just enables or allows the user to control SC recreation, nothing else. Regardless, the behaviour of PV expansion depends entirely on which SC you have in place: a recreated one, or the old one (which is WITHOUT the allowVolumeExpansion setting). Also, OLD PVs don't get any benefit from the fix mentioned above.

Does that clarify the question, @jilju?

@Jose, please correct my understanding of the fix in c#7 if I was wrong.
This PR enables the ability to reconcile the default StorageClasses. Existing OCS installations that upgrade to OCS 4.6 will be able to enable this feature to update their StorageClasses to allow volume expansion, meaning any NEW PVCs created against those StorageClasses will be expandable. PVs which were created against the same StorageClasses prior to the upgrade will not be expandable, but that is tracked in a different BZ than this one.
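For context, once a StorageClass carries allowVolumeExpansion: true, expanding a NEW PVC is just a matter of raising its storage request. A sketch (the PVC name and sizes here are illustrative, not from this bug):

```yaml
# PersistentVolumeClaim sketch: expansion is requested by editing
# spec.resources.requests.storage to a larger value. The name "example-pvc"
# and the 10Gi size are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  namespace: openshift-storage
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # was e.g. 5Gi; increasing this triggers expansion
```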
(In reply to Humble Chirammal from comment #12)
> (In reply to Jilju Joy from comment #11)
> > This fix will enable expansion for new PVCs in the upgraded cluster. The
> > PVCs which were created before upgrade (except fresh installation of OCS
> > 4.5) will not have expansion capability.
> > Hi Humble,
> > Please confirm this.
>
> afaict, the Fix mentioned in c#7, just enable or allow the user to control
> SC recreation, nothing else. Regardless, the behaviour of PV expansion is
> totally depend on which SC you have in place, a recreated one or the old one
> ( which is WITHOUT allowVolumeExpansion requirements), also OLD PVs dont get
> any benefit here with the fix mentioned above.
>
> Does that clarify the question @jilju?

Yes, Humble. Thanks!

> @Jose, please correct the understanding about the fix in c#7 if I was wrong.
(In reply to Jose A. Rivera from comment #7)
> A PR against master has been submitted to remove the need for a
> StorageClusterInitialization CR:
> https://github.com/openshift/ocs-operator/pull/789
>
> This means we will now have the ability to control whether or not to
> reconcile the auxiliary resources (e.g. StorageClasses), but will require
> the admin to manually toggle the desired booleans after upgrade. This is

Hi Jose,

What are the manual steps required? Is it possible from the UI? Upgraded cluster from OCS 4.4.2 to OCS 4.6.0-144. Need to know the manual steps to proceed further.

> needed to preserve upgrade compatibility with current behavior.
(In reply to Jilju Joy from comment #16)
> (In reply to Jose A. Rivera from comment #7)
> > A PR against master has been submitted to remove the need for a
> > StorageClusterInitialization CR:
> > https://github.com/openshift/ocs-operator/pull/789
> >
> > This means we will now have the ability to control whether or not to
> > reconcile the auxiliary resources (e.g. StorageClasses), but will require
> > the admin to manually toggle the desired booleans after upgrade. This is
>
> Hi Jose,
>
> What are the manual steps required ? Is it possible from UI ?
> Upgraded cluster from OCS 4.4.2 to OCS 4.6.0-144. Need to know the manual
> steps to proceed further.

Performed these steps:

1. Installed OCS 4.4.2 on an OCP 4.4 cluster.

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.27    True        False         96m     Cluster version is 4.4.27

$ oc get csv
NAME                  DISPLAY                       VERSION   REPLACES   PHASE
ocs-operator.v4.4.2   OpenShift Container Storage   4.4.2                Succeeded

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           false                  60m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           false                  60m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate           false                  53m
thin                          kubernetes.io/vsphere-volume            Delete          Immediate           false                  108m

2. Upgraded to OCP 4.5.

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.15    True        False         34m     Cluster version is 4.5.15

3. Upgraded to OCS 4.5.

$ oc get csv -n openshift-storage
NAME                  DISPLAY                       VERSION   REPLACES              PHASE
ocs-operator.v4.5.1   OpenShift Container Storage   4.5.1     ocs-operator.v4.4.2   Succeeded

4. Upgraded to OCS 4.6.

$ oc get csv -n openshift-storage
NAME                         DISPLAY                       VERSION        REPLACES              PHASE
ocs-operator.v4.6.0-144.ci   OpenShift Container Storage   4.6.0-144.ci   ocs-operator.v4.5.1   Succeeded

5. Performed the step suggested by Jose.
$ oc edit storagecluster ocs-storagecluster -n openshift-storage

Set spec.managedResources.storageClasses.reconcileStrategy to "manage" in the StorageCluster.

Before edit:

spec:
  encryption: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    snapshotClasses: {}
    storageClasses: {}

After edit:

spec:
  encryption: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    snapshotClasses: {}
    storageClasses:
      reconcileStrategy: manage

SCs ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs did not reconcile to enable volume expansion.

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           false                  7h45m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           false                  7h45m

> > needed to preserve upgrade compatibility with current behavior.
Moving it back to ON_QA as https://github.com/openshift/ocs-operator/pull/871 has the fix for this.
Continued from step 5 in comment #17.

Reverted the change done in step 5 in comment #17:

spec:
  encryption: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    snapshotClasses: {}
    storageClasses: {}

Upgraded to OCS version 4.6.0-148.

$ oc -n openshift-storage get csv
NAME                         DISPLAY                       VERSION        REPLACES                     PHASE
ocs-operator.v4.6.0-148.ci   OpenShift Container Storage   4.6.0-148.ci   ocs-operator.v4.6.0-144.ci   Succeeded

Storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs did not re-create automatically.

5. Performed the step suggested by Jose (same step 5 from comment #17).

$ oc edit storagecluster ocs-storagecluster -n openshift-storage

Set spec.managedResources.storageClasses.reconcileStrategy to "manage" in the StorageCluster.

Before edit:

spec:
  encryption: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    snapshotClasses: {}
    storageClasses: {}

After edit:

spec:
  encryption: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    snapshotClasses: {}
    storageClasses:
      reconcileStrategy: manage

Step 5 can be done using the oc patch command as well:

oc -n openshift-storage patch storagecluster ocs-storagecluster -p '{"spec":{"managedResources":{"storageClasses":{"reconcileStrategy": "manage"}}}}' --type=merge

6.
Verify storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs:

$ oc get sc ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           true                   28s
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           true                   29s

$ oc get sc ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs -o yaml
apiVersion: v1
items:
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2020-10-29T11:38:46Z"
    managedFields:
    - apiVersion: storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:allowVolumeExpansion: {}
        f:parameters:
          .: {}
          f:clusterID: {}
          f:csi.storage.k8s.io/controller-expand-secret-name: {}
          f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
          f:csi.storage.k8s.io/fstype: {}
          f:csi.storage.k8s.io/node-stage-secret-name: {}
          f:csi.storage.k8s.io/node-stage-secret-namespace: {}
          f:csi.storage.k8s.io/provisioner-secret-name: {}
          f:csi.storage.k8s.io/provisioner-secret-namespace: {}
          f:imageFeatures: {}
          f:imageFormat: {}
          f:pool: {}
        f:provisioner: {}
        f:reclaimPolicy: {}
        f:volumeBindingMode: {}
      manager: ocs-operator
      operation: Update
      time: "2020-10-29T11:38:46Z"
    name: ocs-storagecluster-ceph-rbd
    resourceVersion: "2459259"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-ceph-rbd
    uid: 75b1019c-2857-457e-8d9b-861584ab2c4f
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
    csi.storage.k8s.io/fstype: ext4
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
    imageFeatures: layering
    imageFormat: "2"
    pool: ocs-storagecluster-cephblockpool
  provisioner: openshift-storage.rbd.csi.ceph.com
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2020-10-29T11:38:46Z"
    managedFields:
    - apiVersion: storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:allowVolumeExpansion: {}
        f:parameters:
          .: {}
          f:clusterID: {}
          f:csi.storage.k8s.io/controller-expand-secret-name: {}
          f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
          f:csi.storage.k8s.io/node-stage-secret-name: {}
          f:csi.storage.k8s.io/node-stage-secret-namespace: {}
          f:csi.storage.k8s.io/provisioner-secret-name: {}
          f:csi.storage.k8s.io/provisioner-secret-namespace: {}
          f:fsName: {}
        f:provisioner: {}
        f:reclaimPolicy: {}
        f:volumeBindingMode: {}
      manager: ocs-operator
      operation: Update
      time: "2020-10-29T11:38:46Z"
    name: ocs-storagecluster-cephfs
    resourceVersion: "2459253"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
    uid: 2d983cd1-c31a-4082-9608-b2e4a3aee86e
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
    csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
    fsName: ocs-storagecluster-cephfilesystem
  provisioner: openshift-storage.cephfs.csi.ceph.com
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs got recreated. Volume expansion parameters are set.

7. Run ocs-ci test case tests/manage/pv_services/pvc_resize/test_pvc_expansion.py. Test case passed.
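The storage-class check in step 6 can also be scripted. A minimal sketch, assuming the `oc get sc` listing format shown above (the listing is captured as a string here so the snippet is self-contained; against a live cluster you would pipe `oc get sc --no-headers` into the filter instead):

```shell
# Print storage classes whose ALLOWVOLUMEEXPANSION column is "true".
# The column is second-to-last in the `oc get sc` listing (AGE is last).
listing='ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete   Immediate   true   28s
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete   Immediate   true   29s'
expandable=$(echo "$listing" | awk '$(NF-1) == "true" { print $1 }')
echo "$expandable"   # prints the two SC names, one per line
```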
Verified in version: ocs-operator.v4.6.0-148.ci
Cluster version is 4.6.0
Hi Jose,

Please check the steps performed and let me know if any additional checks should be done. We also need to update the Doc Text to cover the manual step required to enable expansion.
(In reply to Jilju Joy from comment #20)
> Hi Jose,
>
> Please check the steps performed and let me know if any additional check
> should be done.
> We also need to update the Doc Text to cover the manual step required to
> enable expansion.

Doc text in the BZ might not help; better to create a doc BZ for this.
Please update the status and 'Fixed in version' based on this PR https://github.com/openshift/ocs-operator/pull/897#issuecomment-728142130
Hi Mudit,

I checked the storagecluster spec in a fresh deployment of OCS 4.6.0-160. spec.managedResources.storageClasses is found empty. "reconcileStrategy: manage" is expected as the default value in spec.managedResources.storageClasses.

spec:
  encryption: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephFilesystems: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    snapshotClasses: {}
    storageClasses: {}

[jijoy@localhost ocs-ci]$ oc get storagecluster ocs-storagecluster -o yaml | grep manage
managedFields:
manager: Mozilla
f:managedResources:
manager: ocs-operator
managedResources:
[jijoy@localhost ocs-ci]$

$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.6.0-160.ci   OpenShift Container Storage   4.6.0-160.ci              Succeeded

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-11-15-104235   True        False         82m     Cluster version is 4.6.0-0.nightly-2020-11-15-104235

Can you please confirm whether the expectation is correct?
The OCS installation mentioned in comment #23 was done from the UI.
Jose is the right person to verify it.
It should be empty. Empty == default behavior, which is implicitly 'manage'.
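Jose's point, that an empty storageClasses stanza behaves as "manage", can be sketched as a simple defaulting rule (illustrative only; this is not the ocs-operator source code):

```shell
# Sketch of the defaulting described above: an unset/empty reconcileStrategy
# is treated as "manage". (Illustrative assumption of the operator's logic.)
strategy=""                       # value read from spec.managedResources.storageClasses
effective="${strategy:-manage}"   # empty string falls back to the default
echo "$effective"                 # prints: manage
```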
(In reply to Jose A. Rivera from comment #26)
> It should be empty. Empty == default behavior, which is implicitly 'manage'.

OHK... does that mean we do not need to look for "reconcileStrategy: managed" for all the ManagedResources to confirm the setting? .. Now default behavior is Managed in code itself ?

Actually wanted to confirm if we need to make any changes in the storagecluster.yaml template we use in ocs-ci to make this implicit managed behavior applicable ?

In ocs-ci:

kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: gp2
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
(In reply to Neha Berry from comment #27)
> (In reply to Jose A. Rivera from comment #26)
> > It should be empty. Empty == default behavior, which is implicitly 'manage'.
>
> OHK... does that mean we do not need to look for "reconcileStrategy:
> managed" for all the ManagedResources to confirm the setting? .. Now default
> behavior is Managed in code itself ?

Yes.

> Actually wanted to confirm if we need to make any changes in the
> storagecluster.yaml template we use in ocs-ci to make this implicit managed
> behavior applicable ?

Correct.
Verified in version: ocs-operator.v4.6.0-160.ci

Steps performed:

Step 1: Installed an OCS 4.4 cluster.

$ oc get csv
NAME                  DISPLAY                       VERSION   REPLACES   PHASE
ocs-operator.v4.4.2   OpenShift Container Storage   4.4.2                Succeeded

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.30    True        False         56m     Cluster version is 4.4.30

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           false                  100m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           false                  100m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate           false                  92m
thin                          kubernetes.io/vsphere-volume            Delete          Immediate           false                  132m

$ oc get sc ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs -o yaml
apiVersion: v1
items:
- allowVolumeExpansion: false
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2020-11-19T09:47:31Z"
    name: ocs-storagecluster-ceph-rbd
    resourceVersion: "39339"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-ceph-rbd
    uid: fd8b899e-79ac-4cf7-8b05-b41ffcef13bc
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/fstype: ext4
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
    imageFeatures: layering
    imageFormat: "2"
    pool: ocs-storagecluster-cephblockpool
  provisioner: openshift-storage.rbd.csi.ceph.com
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
- allowVolumeExpansion: false
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2020-11-19T09:47:31Z"
    name: ocs-storagecluster-cephfs
    resourceVersion: "39338"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
    uid: d5b63f6f-5dc6-4a7e-a40a-f2bd4d4d7c5f
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
    csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
    fsName: ocs-storagecluster-cephfilesystem
  provisioner: openshift-storage.cephfs.csi.ceph.com
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Step 2: Create RBD and CephFS PVCs (including an RBD block volume mode PVC) of size 5Gi. Create app pods.

Step 3: Upgrade to OCP 4.5.18.

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.18    True        False         17m     Cluster version is 4.5.18

Step 4: Upgrade to OCS 4.5.2.

$ oc get csv
NAME                  DISPLAY                       VERSION   REPLACES              PHASE
ocs-operator.v4.5.2   OpenShift Container Storage   4.5.2     ocs-operator.v4.4.2   Succeeded

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           false                  5h10m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           false                  5h10m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate           false                  5h2m
thin                          kubernetes.io/vsphere-volume            Delete          Immediate           false                  5h42m

Step 5: Create RBD and CephFS PVCs (including an RBD block volume mode PVC) of size 5Gi. Create app pods.

Step 6: Upgrade to OCS 4.6.

$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES              PHASE
ocs-operator.v4.5.2          OpenShift Container Storage   4.5.2          ocs-operator.v4.4.2   Replacing
ocs-operator.v4.6.0-160.ci   OpenShift Container Storage   4.6.0-160.ci   ocs-operator.v4.5.2   Installing

$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES              PHASE
ocs-operator.v4.6.0-160.ci   OpenShift Container Storage   4.6.0-160.ci   ocs-operator.v4.5.2   Succeeded

Step 7: Verify storage classes. Storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs are recreated.
allowVolumeExpansion is set to true. Expand secrets are also present.

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate           true                   22m
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate           false                  22m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate           true                   22m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate           false                  6h55m
thin                          kubernetes.io/vsphere-volume            Delete          Immediate           false                  7h34m

$ oc get sc ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs -o yaml
apiVersion: v1
items:
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2020-11-19T16:28:05Z"
    managedFields:
    - apiVersion: storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:allowVolumeExpansion: {}
        f:parameters:
          .: {}
          f:clusterID: {}
          f:csi.storage.k8s.io/controller-expand-secret-name: {}
          f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
          f:csi.storage.k8s.io/fstype: {}
          f:csi.storage.k8s.io/node-stage-secret-name: {}
          f:csi.storage.k8s.io/node-stage-secret-namespace: {}
          f:csi.storage.k8s.io/provisioner-secret-name: {}
          f:csi.storage.k8s.io/provisioner-secret-namespace: {}
          f:imageFeatures: {}
          f:imageFormat: {}
          f:pool: {}
        f:provisioner: {}
        f:reclaimPolicy: {}
        f:volumeBindingMode: {}
      manager: ocs-operator
      operation: Update
      time: "2020-11-19T16:28:05Z"
    name: ocs-storagecluster-ceph-rbd
    resourceVersion: "246978"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-ceph-rbd
    uid: 5477ecba-4ad8-4e09-8e48-8c3ca629df2a
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
    csi.storage.k8s.io/fstype: ext4
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
    imageFeatures: layering
    imageFormat: "2"
    pool: ocs-storagecluster-cephblockpool
  provisioner: openshift-storage.rbd.csi.ceph.com
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2020-11-19T16:28:05Z"
    managedFields:
    - apiVersion: storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:allowVolumeExpansion: {}
        f:parameters:
          .: {}
          f:clusterID: {}
          f:csi.storage.k8s.io/controller-expand-secret-name: {}
          f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
          f:csi.storage.k8s.io/node-stage-secret-name: {}
          f:csi.storage.k8s.io/node-stage-secret-namespace: {}
          f:csi.storage.k8s.io/provisioner-secret-name: {}
          f:csi.storage.k8s.io/provisioner-secret-namespace: {}
          f:fsName: {}
        f:provisioner: {}
        f:reclaimPolicy: {}
        f:volumeBindingMode: {}
      manager: ocs-operator
      operation: Update
      time: "2020-11-19T16:28:05Z"
    name: ocs-storagecluster-cephfs
    resourceVersion: "246974"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
    uid: f49f65e1-e343-48dd-8911-248563079b18
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
    csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
    fsName: ocs-storagecluster-cephfilesystem
  provisioner: openshift-storage.cephfs.csi.ceph.com
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Step 8: Executed test case
tests/manage/pv_services/pvc_resize/test_pvc_expansion.py::TestPvcExpand::test_pvc_expansion

Test case passed. Verified expansion capability of new PVCs.
Result - https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/14847/testReport/

Step 9: Expanded the PVCs created in Step 2 and Step 5 to 10Gi. Verified that the capacity of the PVCs was expanded to 10Gi. Verified the change on the pods. Ran IO to utilize the expanded volume. This step verified the expansion capability of old PVCs - https://bugzilla.redhat.com/show_bug.cgi?id=1859183
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5605