Bug 2091998 - Volume Snapshots do not work with external restricted mode
Summary: Volume Snapshots do not work with external restricted mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: Parth Arora
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-31 13:56 UTC by Parth Arora
Modified: 2023-08-09 17:00 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:54:12 UTC
Embargoed:




Links:
GitHub red-hat-storage/ocs-operator pull 1724 (open): ocs: Update Volume Snapshots secret name generation (last updated 2022-06-21 04:22:54 UTC)
GitHub red-hat-storage/ocs-operator pull 1747 (open): Bug 2091998: [release-4.11] ocs: Update Volume Snapshots secret name generation (last updated 2022-07-05 06:38:45 UTC)
Red Hat Product Errata RHSA-2022:6156 (last updated 2022-08-24 13:54:26 UTC)

Description Parth Arora 2022-05-31 13:56:49 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

For an external ODF cluster deployed in restricted auth mode, which creates restricted users/secrets for the storage cluster, Volume Snapshots do not work as expected.

This is mainly because the volume snapshot class contains hardcoded values, so it does not pick up the secret name that is provided with the storage class.
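
For reference, the snapshotter secret is wired through the standard CSI parameters on the VolumeSnapshotClass. Below is a minimal sketch of a correctly wired RBD class; the driver, clusterID, and secret name are illustrative placeholders for a typical external-mode setup, not values taken from this cluster. In restricted auth mode the operator should fill in the per-cluster/per-pool secret that the storage class references instead of a hardcoded default:

$ cat <<EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ocs-external-storagecluster-rbdplugin-snapclass
driver: openshift-storage.rbd.csi.ceph.com
deletionPolicy: Delete
parameters:
  clusterID: openshift-storage
  # These two parameters are the ones that must match the restricted secret
  # referenced by the external-mode storage class; the name below is a placeholder.
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner-restricted
  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
EOF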


Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 2 Parth Arora 2022-06-06 11:10:54 UTC
nigoyal, any update on this? Has anybody started working on it, or should I go ahead and try to fix it?

Comment 4 Martin Bukatovic 2022-06-17 15:23:51 UTC
I don't understand the use case. Could we:

- Reference what "restricted auth mode" means in this context?
- Explain steps of the reproducer (assuming the reference above doesn't provide enough details)?

Comment 5 Parth Arora 2022-06-17 19:26:55 UTC
We can probably describe it as a requirement.

"Restricted auth mode" means restricting the CSI users to a specific cluster and pool; it will be available to users from 4.11 (https://bugzilla.redhat.com/show_bug.cgi?id=2069314).

If you create a cluster in restricted mode, volume snapshots will not work; see the sketch below for what this mode produces on the Ceph side.
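
To make that concrete, here is a rough sketch of the kind of per-cluster/per-pool CSI user that restricted mode creates on the Ceph side. The user name, cluster name, and pool are illustrative placeholders (the exact naming and caps come from the external-cluster export script, so treat this as an approximation):

$ ceph auth get client.csi-rbd-provisioner-mycluster-mypool
[client.csi-rbd-provisioner-mycluster-mypool]
        key = <redacted>
        caps mgr = "allow rw"
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=mypool"

Each such restricted user gets its own Kubernetes secret on the ODF cluster, which is why the volume snapshot class cannot rely on a fixed, hardcoded secret name.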

Comment 6 Martin Bukatovic 2022-06-21 14:46:11 UTC
Since this is an issue with an accepted feature/bugfix tracked in BZ 2069314, and the bug has a proposed fix, I'm providing QA ack and assigning the bug to the QA contact of BZ 2069314.

Comment 11 Vijay Avuthu 2022-07-18 10:11:16 UTC
Deployment with restricted auth mode: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14657/console

> created PVCs for both RBD and CephFS
> took volume snapshots for both from the UI
> checked the volumesnapshots

$ oc get vs
NAME                  READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                                        SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         cephfs-pvc                           1Gi           ocs-external-storagecluster-cephfsplugin-snapclass   snapcontent-c47ff0b6-a3b6-4108-bcdd-695ca8d50320   2d20h          2d20h
rbd-pvc-snapshot      true         rbd-pvc                              1Gi           ocs-external-storagecluster-rbdplugin-snapclass      snapcontent-8f9cd820-54b6-4403-b910-fb970082f4b0   2d20h          2d20h
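
As an extra spot check tied to the root cause (a suggestion, not part of the runs above), the snapshot classes can be compared against the corresponding storage classes to confirm they now reference the same restricted secret. The storage class name below is the usual external-mode RBD class name and is an assumption; the snapshot class name is taken from the output above:

$ oc get storageclass ocs-external-storagecluster-ceph-rbd -o yaml | grep secret-name
$ oc get volumesnapshotclass ocs-external-storagecluster-rbdplugin-snapclass -o yaml | grep secret-name

Both commands should report the same provisioner/snapshotter secret name rather than a hardcoded default.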


> also ran tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py here: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14728/consoleFull

Passed   tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py::TestPvcSnapshot::test_pvc_snapshot[CephBlockPool]   215.56 s
Passed   tests/manage/pv_services/pvc_snapshot/test_pvc_snapshot.py::TestPvcSnapshot::test_pvc_snapshot[CephFileSystem]

Test steps (same for both interfaces): 1. Run I/O on a pod file. 2. Calculate the md5sum of the file. 3. Take a snapshot of the PVC. 4. Create a new PVC out of that snapshot. 5. Attach a new pod to it. 6. Verify that the file is present on the new pod as well. 7. Verify that the md5sum of the file on the new pod matches the md5sum of the file on the original pod.

Moving to Verified

Comment 13 errata-xmlrpc 2022-08-24 13:54:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

