Description of problem (please be as detailed as possible and provide log snippets):

For an external ODF cluster using restricted auth mode, which creates restricted users/secrets for the storage cluster, volume snapshots do not work as expected. This is because the VolumeSnapshotClass contains hardcoded secret values, so it is not updated with the secret name provided with the StorageClass.

Version of all relevant components (if applicable):

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
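For context, a minimal sketch of what the affected RBD VolumeSnapshotClass looks like. Field values here are illustrative assumptions based on default ODF external-mode naming, not copied from an affected cluster; the point is that the snapshotter secret referenced under `parameters` must be the restricted secret the StorageClass points to, not a hardcoded default:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ocs-external-storagecluster-rbdplugin-snapclass
driver: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  # In restricted auth mode the CSI secret is created per cluster/pool,
  # so a hardcoded default name here no longer matches the secret that
  # the StorageClass references, and snapshot creation fails.
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
deletionPolicy: Delete
```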
nigoyal, any update on this? Has anybody started working on it, or should I go ahead and try to fix it?
I don't understand the use case. Could we:
- Reference what "restricted auth mode" means in this context?
- Explain the steps of the reproducer (assuming the reference above doesn't provide enough detail)?
We can probably treat it as a requirement: "restricted auth mode" means restricting CSI users per cluster and pool, and it will be available to users from 4.11 (https://bugzilla.redhat.com/show_bug.cgi?id=2069314). Create a cluster in restricted mode and volume snapshots will not work.
Since this is an issue with an accepted feature/bugfix tracked in BZ 2069314, and the bug has a proposed fix, I'm providing QA ack and assigning the bug to the QA contact of BZ 2069314.
Deployment with restricted auth mode: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14657/console

> Created PVCs for both RBD and CephFS
> Took a volume snapshot of each from the UI
> Checked the volume snapshots:

$ oc get vs
NAME                  READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                                        SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         cephfs-pvc                           1Gi           ocs-external-storagecluster-cephfsplugin-snapclass   snapcontent-c47ff0b6-a3b6-4108-bcdd-695ca8d50320   2d20h          2d20h
rbd-pvc-snapshot      true         rbd-pvc                              1Gi           ocs-external-storagecluster-rbdplugin-snapclass      snapcontent-8f9cd820-54b6-4403-b910-fb970082f4b0   2d20h          2d20h

> Also ran tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py here: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14728/consoleFull

Passed: tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py::TestPvcSnapshot::test_pvc_snapshot[CephBlockPool] (215.56s)
Passed: tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py::TestPvcSnapshot::test_pvc_snapshot[CephFileSystem]

Test steps (both interfaces):
1. Run I/O on a pod file.
2. Calculate md5sum of the file.
3. Take a snapshot of the PVC.
4. Create a new PVC out of that snapshot.
5. Attach a new pod to it.
6. Verify that the file is present on the new pod also.
7. Verify that the md5sum of the file on the new pod matches the md5sum of the file on the original pod.

Args: interface(str): The type of the interface (e.g. CephBlockPool, CephFileSystem); pvc_factory: A fixture to create a new PVC; teardown_factory: A fixture to destroy objects

Moving to Verified
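The verification steps above reduce to a data-integrity check across the snapshot/restore cycle. A minimal sketch of that checksum comparison in Python (the file paths are hypothetical, and the snapshot/restore step is simulated here with a plain file copy):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def md5sum(path: Path) -> str:
    """Return the hex md5 digest of a file, reading in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Simulate the test flow: write data on the "original pod", copy it to
# the "restored-from-snapshot pod", then compare checksums.
workdir = Path(tempfile.mkdtemp())
original = workdir / "original.dat"
restored = workdir / "restored.dat"
original.write_bytes(b"some I/O written by the pod\n" * 1000)
shutil.copyfile(original, restored)  # stands in for snapshot + restore

assert md5sum(original) == md5sum(restored), "checksum mismatch after restore"
print("md5 match:", md5sum(original))
```

In the real test the two paths live on different pods, but the pass/fail criterion is exactly this digest equality.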
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156