Bug 2091998
| Summary: | Volume Snapshots do not work with external restricted auth mode | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Parth Arora <paarora> |
| Component: | ocs-operator | Assignee: | Parth Arora <paarora> |
| Status: | CLOSED ERRATA | QA Contact: | Vijay Avuthu <vavuthu> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.11 | CC: | jarrpa, madam, muagarwa, nigoyal, ocs-bugs, odf-bz-bot, sostapov |
| Target Milestone: | --- | ||
| Target Release: | ODF 4.11.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2022-08-24 13:54:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Parth Arora
2022-05-31 13:56:49 UTC
nigoyal, any update on this? Has anybody started working on it, or should I go and try to have a fix for it?

I don't understand the use case. Could we:

- Reference what "restricted auth mode" means in this context?
- Explain the steps of the reproducer (assuming the reference above doesn't provide enough details)?

Probably we can state it as a requirement: "restricted auth mode" means restricting csi-users per cluster and pool, and it will be available to users from 4.11 (https://bugzilla.redhat.com/show_bug.cgi?id=2069314). Try to create a cluster in restricted mode and the volume snapshot will not work.

Since this is an issue with an accepted feature/bugfix tracked in BZ 2069314, and the bug has a proposed fix, I'm providing QA ack and assigning the bug to the QA contact of BZ 2069314.

Deployment with restricted auth mode: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14657/console

> created PVCs for both rbd and cephfs
> took a volume snapshot of each from the UI
> checked the volumesnapshots:

```
$ oc get vs
NAME                  READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                                        SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         cephfs-pvc                           1Gi           ocs-external-storagecluster-cephfsplugin-snapclass   snapcontent-c47ff0b6-a3b6-4108-bcdd-695ca8d50320   2d20h          2d20h
rbd-pvc-snapshot      true         rbd-pvc                              1Gi           ocs-external-storagecluster-rbdplugin-snapclass      snapcontent-8f9cd820-54b6-4403-b910-fb970082f4b0   2d20h          2d20h
```

> also ran tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py here: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14728/consoleFull

Passed tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py::TestPvcSnapshot::test_pvc_snapshot[CephBlockPool] (215.56 Log File)

1. Run I/O on a pod file.
2. Calculate md5sum of the file.
3. Take a snapshot of the PVC.
4. Create a new PVC out of that snapshot.
5. Attach a new pod to it.
6. Verify that the file is present on the new pod also.
7. Verify that the md5sum of the file on the new pod matches the md5sum of the file on the original pod.

Args: interface(str): the type of the interface (e.g. CephBlockPool, CephFileSystem); pvc_factory: a fixture to create a new PVC; teardown_factory: a fixture to destroy objects

Passed tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py::TestPvcSnapshot::test_pvc_snapshot[CephFileSystem] (same verification steps and fixtures as above)

Moving to Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156
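For reference, the snapshot-and-restore flow exercised by the verification above can be sketched with manifests like the following. This is a minimal sketch, not taken from the bug report: the PVC and restore names are illustrative, and the storage class name for an ODF external-mode RBD pool is assumed; the snapshot class matches the one shown in the `oc get vs` output.

```yaml
# Take a snapshot of an existing RBD-backed PVC (step 3 of the test).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: ocs-external-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc
---
# Restore the snapshot into a new PVC (step 4 of the test); the
# storageClassName below is an assumed external-mode class name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: ocs-external-storagecluster-ceph-rbd
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

With restricted auth mode enabled, the failure reported here would surface when the VolumeSnapshot never reaches `READYTOUSE: true`, since the per-cluster/per-pool csi-users lacked the permissions the snapshotter needed.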