Description of problem (please be as detailed as possible and provide log snippets):
StorageCluster is not recreated after a StorageCluster is deleted.

Version of all relevant components (if applicable):
ocs-operator master

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Manually delete the StorageClass after deleting the StorageCluster, before recreating the StorageCluster.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Can this issue be reproduced?
yes

Can this issue be reproduced from the UI?
not sure

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. oc create -f deploy/deploy-with-olm.yaml
2. oc create -f deploy/crds/ocs_v1_storagecluster_cr.yaml
3. After the StorageCluster is created and has completed reconciling, oc delete -f deploy/deploy-with-olm.yaml
4. oc delete -f deploy/crds/ocs_v1_storagecluster_cr.yaml
5. Delete the finalizers that prevent the OCS cluster from being deleted.
6. oc create -f deploy/deploy-with-olm.yaml
7. oc create -f deploy/crds/ocs_v1_storagecluster_cr.yaml

Actual results:
The StorageCluster is not created, and the ocs-operator log is filled with the following:
```
{"level":"error","ts":"2020-06-10T11:50:29.078Z","logger":"controller-runtime.controller","msg":"Reconciler error","controller":"storagecluster-controller","request":"openshift-storage/example-storagecluster","error":"StorageClass.storage.k8s.io \"example-storagecluster-cephfs\" is invalid: parameters: Forbidden: updates to parameters are forbidden.","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/rgupta/go/src/github.com/openshift/ocs-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
```

Expected results:
The StorageCluster should have been created.

Additional info:
The StorageClass was not deleted when the StorageCluster was first deleted. As per Madhu Rajanna and Umanga Chapagain, updates don't work on StorageClass.
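For context, the "Forbidden: updates to parameters are forbidden" error comes from the apiserver's StorageClass update validation: apart from a few mutable fields such as allowVolumeExpansion and mountOptions, a StorageClass is effectively immutable after creation. The Python sketch below illustrates that rule; the field lists are illustrative, not the apiserver's exact code.

```python
# Illustrative sketch of the apiserver rule behind
# "parameters: Forbidden: updates to parameters are forbidden".
# Only a few StorageClass fields may change after creation; everything
# else, including .parameters and .provisioner, is immutable.
MUTABLE_FIELDS = {"allowVolumeExpansion", "mountOptions", "metadata"}

def validate_storageclass_update(old: dict, new: dict) -> list:
    """Return a list of 'Forbidden' errors; empty means the update is allowed."""
    errors = []
    for field in sorted(set(old) | set(new)):
        if field in MUTABLE_FIELDS:
            continue
        if old.get(field) != new.get(field):
            errors.append(f"{field}: Forbidden: updates to {field} are forbidden")
    return errors

old_sc = {"parameters": {"pool": "old-pool"},
          "provisioner": "openshift-storage.rbd.csi.ceph.com"}
new_sc = {"parameters": {"pool": "new-pool"},
          "provisioner": "openshift-storage.rbd.csi.ceph.com"}
# Changing parameters is rejected:
print(validate_storageclass_update(old_sc, new_sc))
# Changing only a mutable field passes:
print(validate_storageclass_update(old_sc, {**old_sc, "allowVolumeExpansion": True}))
```

This is why the operator's plain Update call on an existing StorageClass keeps failing in the log above.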
To summarize the discussions that happened between the operator and CSI teams: the SC is immutable after its creation, so recreating the SC is one option, but we have to verify the behaviour of that approach and measure any impact. We need a general upgrade strategy for resources like this, as we will see the same pattern of SC changes (or other object changes) in various situations. It would also be good to validate/check how other OCP controllers handle a similar pattern at upgrade time.
Another option is to introduce an entirely new SC for expansion (and maybe another one for compression/encryption, etc., in the future), because old PVCs created on old SCs cannot be expanded anyway. If the user really wants new expandable PVCs on the existing default SCs on the latest version of OCS (say 4.5), then only recreating the default SCs makes sense. In that case, the admin also has the choice of doing it manually.
I am not 100% sure I understand the problem yet. If I get it correctly, ocs-operator now tries to reconcile storage classes when reconciling the StorageCluster. This was not originally done. AFAIK, the original design was to create storage classes once at initialization, but not to reconcile them later. This new behavior seems to be a regression introduced by the work for the external cluster. Specifically:
https://github.com/openshift/ocs-operator/pull/500
https://github.com/openshift/ocs-operator/commit/be86eedd55a6bca491682de1b1696534794a3f08
This is a design change that IMHO should be reverted, and at the very least it needs to be re-discussed and fixed.
Marking as a regression based on comment #6
I think we shouldn't mix issues here. I see possibly 3 different issues:

(1) The original issue of this BZ: uninstalling the storagecluster followed by an install fails because the storage class was still there. I'm not sure why it fails trying to update the SC - have you re-installed with a newer version? Anyway, this one will be fixed by https://github.com/openshift/ocs-operator/pull/556, which should get into one of the next builds.

(2) Many comments (but not the original description) mentioned upgrade of OCS. Whenever the ocs-operator wants to change an SC, it needs to delete and re-create it. ==> We need a fix here, afaict.

(3) How to enable the new aspects of the SC for expansion. This needs separate discussion: do we want to do it for the user/customer, or leave it up to them?

Cheers - Michael
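The delete-and-recreate requirement in (2) can be sketched as a small decision function. This is a hypothetical simplification, not the actual ocs-operator code: if only mutable fields differ, an in-place Update works; if any immutable field differs, the object has to be deleted and recreated.

```python
# Hypothetical sketch of reconciling an object with immutable fields
# (such as a StorageClass): decide between update and delete+recreate.
MUTABLE = {"allowVolumeExpansion", "mountOptions"}

def plan_reconcile(existing, desired: dict) -> list:
    if existing is None:
        return ["create"]
    if existing == desired:
        return []  # nothing to do
    changed = {k for k in set(existing) | set(desired)
               if existing.get(k) != desired.get(k)}
    if changed <= MUTABLE:
        return ["update"]
    # An immutable field changed: a plain Update would be rejected by
    # the apiserver, so the object must be deleted and recreated.
    return ["delete", "create"]

current = {"parameters": {"pool": "a"}, "allowVolumeExpansion": False}
wanted = {"parameters": {"pool": "a",
                         "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-rbd-provisioner"},
          "allowVolumeExpansion": True}
print(plan_reconcile(current, wanted))  # parameters changed -> delete + create
```

Adding the expansion secrets touches .parameters, which is immutable, so enabling volume expansion on an existing SC always lands in the delete+create branch.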
Rohan, are you taking the exact steps you described?

1. oc create -f deploy/deploy-with-olm.yaml
2. oc create -f deploy/crds/ocs_v1_storagecluster_cr.yaml
3. after StorageCluster is created and completed reconciling, oc delete -f deploy/deploy-with-olm.yaml
4. oc delete -f deploy/crds/ocs_v1_storagecluster_cr.yaml
5. delete the finalizers that prevent the OCS cluster from being deleted.
6. oc create -f deploy/deploy-with-olm.yaml
7. oc create -f deploy/crds/ocs_v1_storagecluster_cr.yaml

If so, I don't know what you're trying to do, but this is an invalid operation. What are you actually trying to test?
@Rohan, from what you describe, the steps you are doing here are not really a valid procedure, but you seem to have found a valid bug. The valid steps for deleting the storage cluster and trying to create a new one would be something like:

1. oc create -f deploy/deploy-with-olm.yaml
2. oc create -f deploy/crds/ocs_v1_storagecluster_cr.yaml
3. wait for the StorageCluster to be created and to complete reconciling
4. oc delete -f deploy/crds/ocs_v1_storagecluster_cr.yaml
5. delete the finalizers that prevent the OCS cluster from being deleted.
6. oc create -f deploy/crds/ocs_v1_storagecluster_cr.yaml

I.e. don't delete deploy-with-olm.yaml in between. (You might need to modify the storageclass to mention the wiping of (meta)data to be able to re-install.) Can you reproduce the problem with steps like these?

The actual problem uncovered here seems to be that https://github.com/openshift/ocs-operator/blob/master/pkg/controller/storagecluster/initialization_reconciler.go#L71 uses Update on a storage class, which it shouldn't. A second-layer problem is that the deletion of the storagecluster does not delete the storageclasses. This is fixed by https://github.com/openshift/ocs-operator/pull/556 and would serve as a fix for the immediate bug. But we should still fix the actual problem.
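Regarding step 5, "delete the finalizers" amounts to clearing .metadata.finalizers so the apiserver can complete the deletion; on a live cluster this is usually done with something like `oc patch storagecluster <name> --type=merge -p '{"metadata":{"finalizers":null}}'`. The sketch below only shows the effect of such a patch on the object; the finalizer string used here is an example, not necessarily the exact one ocs-operator sets.

```python
# Effect of clearing .metadata.finalizers on a (dict-modeled) resource.
# With a non-empty finalizers list, the apiserver keeps a deleted object
# around until every finalizer is removed; clearing the list unblocks it.
def strip_finalizers(obj: dict) -> dict:
    """Return a copy of the object with .metadata.finalizers removed."""
    meta = dict(obj.get("metadata", {}))
    meta.pop("finalizers", None)
    return {**obj, "metadata": meta}

storage_cluster = {
    "kind": "StorageCluster",
    "metadata": {
        "name": "ocs-storagecluster",
        # example finalizer name, for illustration only
        "finalizers": ["storagecluster.ocs.openshift.io"],
    },
}
patched = strip_finalizers(storage_cluster)
print(patched["metadata"])  # no 'finalizers' key; deletion can proceed
```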
@Elad, FYI, for all I can tell, this is NOT a regression but has been there since the beginning.
It's unclear to me why the severity is high - how is the regular OCS user impacted by this?
The original issue of this BZ should be resolved by https://github.com/openshift/ocs-operator/pull/574, so moving this to MODIFIED.
[Additional note] https://github.com/openshift/ocs-operator/pull/590 handles the upgrade path of SC too. Adding it here for completeness.
Moving it to POST. This is the PR for the backport: https://github.com/openshift/ocs-operator/pull/600
merged
Performed upgrade to 4.5. The update was successful, but the storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs were not recreated, and volume expansion is not enabled on these SCs. Install and upgrade were performed on an OCP cluster; the cluster version is 4.4.12.

After upgrade:

```
$ oc get storagecluster
NAME                 AGE    PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   115m   Ready              2020-07-15T05:57:58Z   4.4.0
```

(The mismatch in storagecluster version is a known issue: https://bugzilla.redhat.com/show_bug.cgi?id=1839988, https://bugzilla.redhat.com/show_bug.cgi?id=1855339)

```
$ oc get csv
NAME                            DISPLAY                       VERSION        REPLACES                        PHASE
lib-bucket-provisioner.v2.0.0   lib-bucket-provisioner        2.0.0          lib-bucket-provisioner.v1.0.0   Succeeded
ocs-operator.v4.5.0-487.ci      OpenShift Container Storage   4.5.0-487.ci   ocs-operator.v4.4.2-483.ci      Succeeded
```

```
$ oc get sc ocs-storagecluster-ceph-rbd -o yaml
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-07-15T05:57:58Z"
  name: ocs-storagecluster-ceph-rbd
  resourceVersion: "21887"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-ceph-rbd
  uid: 4092c989-8bf4-4f41-bf89-da3f19aa2409
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering
  imageFormat: "2"
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc get sc ocs-storagecluster-cephfs -o yaml
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-07-15T05:57:58Z"
  name: ocs-storagecluster-cephfs
  resourceVersion: "21885"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
  uid: f04def2c-9fd0-425d-b752-10b9d502b080
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Aren't these storage classes expected to be updated according to comment #18?
Logs collected from the upgraded cluster - http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-45-44/jijoy-45-44_20200715T052124/logs/failed_testcase_ocs_logs_1594802940/test_pvc_expansion_ocs_logs/
Based on comment #23, marking this as FailedQA.
The StorageClasses are part of the StorageClusterInitialization, and thus BY DESIGN are only ever reconciled once. The correct thing to do in this case would be to delete the StorageClusterInitialization to trigger the reconciliation again. Thus, the actual code changes that had been made have not been tested. Please retest using the described procedure.

This is yet another symptom of "initialization" tasks, which go against the operator model; several people warned that we'd run into upgrade issues like this in the future. We need to have another serious conversation about just how much freedom we want to grant our customers and how much we want to take out of their control, but that is beyond the scope of this BZ. For now, there is nothing to do in the OCS Operator, merely additions to the documentation.
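The "reconciled once" behavior can be pictured as a guard keyed on the existence of the initialization CR. The following is a hypothetical simplification of the operator's logic, not its real code:

```python
# Hypothetical simplification of the StorageClusterInitialization pattern:
# one-time tasks (such as creating the StorageClasses) run only while the
# init CR is absent, so deleting that CR is the way to make the operator
# run them again, e.g. after an upgrade that changed the SC definitions.
def reconcile(cluster_state: dict) -> list:
    actions = []
    if "StorageClusterInitialization" not in cluster_state:
        actions.append("create StorageClasses")
        # record that initialization has run
        cluster_state["StorageClusterInitialization"] = "ocs-storagecluster"
    actions.append("reconcile CephCluster and other resources")
    return actions

state = {}
print(reconcile(state))  # first pass: initialization tasks run
print(reconcile(state))  # later passes: initialization tasks are skipped
# simulating `oc delete storageclusterinitialization ocs-storagecluster`:
del state["StorageClusterInitialization"]
print(reconcile(state))  # initialization tasks run again
```

This is why, after an upgrade, the SCs keep their old definitions until the StorageClusterInitialization is deleted by hand.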
(In reply to Jose A. Rivera from comment #26)
> The StorageClasses are part of the StorageClusterInitialization, and thus BY
> DESIGN are only ever reconciled once. The correct thing to do in this case
> would be to delete the StorageClusterInitialization to trigger the
> reconciliation again. Thus, the actual code changes that had been made have
> not been tested. Please retest using the described procedure.

The update from the previous SC to the new SC with the new params is planned to be handled by the operator without any manual intervention. This is to enable the volume expansion params after upgrading to OCS 4.5. Setting needinfo on Umanga and Humble to confirm.

> This is yet another symptom of "initialization" tasks, which go against the
> operator model, and several people warned that we'd run into upgrade issues
> like this in the future. We need to have another serious conversation about
> just how much freedom we want to grant our customers and how much we want to
> take out of their control, but that is beyond the scope of this BZ. For now,
> there is nothing to do in the OCS Operator, merely additions to the
> documentation.
(In reply to Jilju Joy from comment #27)
> (In reply to Jose A. Rivera from comment #26)
> > The StorageClasses are part of the StorageClusterInitialization, and thus BY
> > DESIGN are only ever reconciled once. The correct thing to do in this case
> > would be to delete the StorageClusterInitialization to trigger the
> > reconciliation again. Thus, the actual code changes that had been made have
> > not been tested. Please retest using the described procedure.
>
> The update from previous SC to new SC with the new params is planned to be
> handled by the operator without any manual intervention. This is to enable
> volume expansion params after upgrading to OCS 4.5. Setting needinfo on
> Umanga and Humble to confirm.

True, we also had confirmation from the operator team on this in the chat/discussions a couple of times. Umanga/Rajat, can you look into this?
Tested after upgrade from OCS 4.4.1 to 4.5.0-494. The storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs were not updated/recreated, so volume expansion is not supported after upgrade.

```
$ oc get csv -n openshift-storage | grep ocs-operator
ocs-operator.v4.5.0-494.ci   OpenShift Container Storage   4.5.0-494.ci   ocs-operator.v4.4.1   Succeeded

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-07-20-152128   True        False         117m    Cluster version is 4.5.0-0.nightly-2020-07-20-152128
```
(In reply to Jilju Joy from comment #31)
> Tested after upgrade from OCS 4.4.1 to 4.5.0-494. The storage class
> ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs were not
> updated/recreated. So volume expansion is not supported after upgrade.

This is correct. You still have to delete the storageclusterinitialization CR to trigger the upgrade of the storage class. Previously, this would have failed; now it should succeed.
Moving back to ON_QA to verify.
I believe there is no further discussion to be had for OCS 4.5. The manual step will be required. Please verify that deleting the StorageClusterInitialization works as intended.
As discussed in the meeting, we are not supporting PV expansion for upgraded clusters; hence, there is no need to change the default SCs. In the worst case, if customers really need it, they might have to contact support. It was decided that this BZ would be moved out to 4.6. I am not sure why it is still in 4.5 and ON_QA. Did the plan change again, or am I missing something?
Verified that the SCs ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs will be re-created by deleting the storageclusterinitialization.

Steps performed:

Before upgrade:

```
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.3     True        False         123m    Cluster version is 4.5.3
```

and ocs-operator.v4.4.1.

After upgrade:

```
$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES              PHASE
awss3operator.1.0.1          AWS S3 Operator               1.0.1          awss3operator.1.0.0   Succeeded
ocs-operator.v4.5.0-508.ci   OpenShift Container Storage   4.5.0-508.ci   ocs-operator.v4.4.1   Succeeded

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                           kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   135m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  122m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              false                  122m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  117m
```

Step 1: Checked the SCs after upgrade. The volume expansion parameters are not present.

```
$ oc get sc ocs-storagecluster-ceph-rbd -o yaml
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-07-29T05:52:15Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:parameters:
        .: {}
        f:clusterID: {}
        f:csi.storage.k8s.io/fstype: {}
        f:csi.storage.k8s.io/node-stage-secret-name: {}
        f:csi.storage.k8s.io/node-stage-secret-namespace: {}
        f:csi.storage.k8s.io/provisioner-secret-name: {}
        f:csi.storage.k8s.io/provisioner-secret-namespace: {}
        f:imageFeatures: {}
        f:imageFormat: {}
        f:pool: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: ocs-operator
    operation: Update
    time: "2020-07-29T05:52:15Z"
  name: ocs-storagecluster-ceph-rbd
  resourceVersion: "23081"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-ceph-rbd
  uid: be2a29d7-96de-48bf-99e4-ad72c7573eae
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering
  imageFormat: "2"
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc get sc ocs-storagecluster-cephfs -o yaml
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-07-29T05:52:15Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:parameters:
        .: {}
        f:clusterID: {}
        f:csi.storage.k8s.io/node-stage-secret-name: {}
        f:csi.storage.k8s.io/node-stage-secret-namespace: {}
        f:csi.storage.k8s.io/provisioner-secret-name: {}
        f:csi.storage.k8s.io/provisioner-secret-namespace: {}
        f:fsName: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: ocs-operator
    operation: Update
    time: "2020-07-29T05:52:15Z"
  name: ocs-storagecluster-cephfs
  resourceVersion: "23080"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
  uid: 6ef0c10f-cc9f-47e7-b1a2-496ae1a716c1
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Step 2: Deleted the storageclusterinitialization.

```
$ oc delete storageclusterinitialization ocs-storagecluster
storageclusterinitialization.ocs.openshift.io "ocs-storagecluster" deleted

$ oc get storageclusterinitialization
NAME                 AGE
ocs-storagecluster   38s
```

Checked the age of the SCs: they got re-created.

```
$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                           kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   138m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   50s
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   51s
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  120m
```

Step 3: Verified that the expansion parameters are present in the SCs.

```
$ oc get sc ocs-storagecluster-ceph-rbd -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-07-29T07:56:49Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:parameters:
        .: {}
        f:clusterID: {}
        f:csi.storage.k8s.io/controller-expand-secret-name: {}
        f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
        f:csi.storage.k8s.io/fstype: {}
        f:csi.storage.k8s.io/node-stage-secret-name: {}
        f:csi.storage.k8s.io/node-stage-secret-namespace: {}
        f:csi.storage.k8s.io/provisioner-secret-name: {}
        f:csi.storage.k8s.io/provisioner-secret-namespace: {}
        f:imageFeatures: {}
        f:imageFormat: {}
        f:pool: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: ocs-operator
    operation: Update
    time: "2020-07-29T07:56:49Z"
  name: ocs-storagecluster-ceph-rbd
  resourceVersion: "97816"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-ceph-rbd
  uid: 29019d89-57a8-42c7-8ee5-32e98a6e106c
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering
  imageFormat: "2"
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc get sc ocs-storagecluster-cephfs -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-07-29T07:56:48Z"
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:parameters:
        .: {}
        f:clusterID: {}
        f:csi.storage.k8s.io/controller-expand-secret-name: {}
        f:csi.storage.k8s.io/controller-expand-secret-namespace: {}
        f:csi.storage.k8s.io/node-stage-secret-name: {}
        f:csi.storage.k8s.io/node-stage-secret-namespace: {}
        f:csi.storage.k8s.io/provisioner-secret-name: {}
        f:csi.storage.k8s.io/provisioner-secret-namespace: {}
        f:fsName: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: ocs-operator
    operation: Update
    time: "2020-07-29T07:56:48Z"
  name: ocs-storagecluster-cephfs
  resourceVersion: "97804"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
  uid: 09d3fddc-f766-419a-93b0-f844c233cf56
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Step 4: Executed test case tests/manage/pv_services/pvc_resize/test_pvc_expansion.py::TestPvcExpand::test_pvc_expansion to verify expansion. The test case passed.*

*Note: This bug is verified for the code change which recreates the SCs by deleting the storageclusterinitialization.
Note that we are not supporting PVC expansion in upgraded clusters. Step 4 was done for future reference, if any.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3754
Clone for the expected fix in future release (no manual WA) - https://bugzilla.redhat.com/show_bug.cgi?id=1872119
Removing AutomationBackLog keyword. Not in scope for regular regression test because the scenario is applicable only on clusters upgraded to OCS 4.5