Bug 2183687 - [Fusion-aaS][Backport to 4.12.3] failed to mount the cephfs subvolume as subvolumegroup name is not sent in the GetStorageConfig RPC call
Summary: [Fusion-aaS][Backport to 4.12.3] failed to mount the cephfs subvolume as s...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.12.3
Assignee: Madhu Rajanna
QA Contact: Jilju Joy
URL:
Whiteboard:
Depends On: 2183155
Blocks:
 
Reported: 2023-04-01 05:34 UTC by Madhu Rajanna
Modified: 2023-08-09 17:00 UTC
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 2183155
Environment:
Last Closed: 2023-05-23 09:17:28 UTC
Embargoed:




Links
System | ID | Private | Priority | Status | Summary | Last Updated
Github | red-hat-storage ocs-operator pull 2019 | 0 | None | open | Bug 2183687: [release-4.12] add subvolumegroupname to GetStorageClassClaimConfig RPC | 2023-04-20 07:06:57 UTC
Red Hat Product Errata | RHSA-2023:3265 | 0 | None | None | None | 2023-05-23 09:18:05 UTC

Description Madhu Rajanna 2023-04-01 05:34:10 UTC
+++ This bug was initially created as a clone of Bug #2183155 +++

Description of problem (please be as detailed as possible and provide log snippets):

Since consumer mode moved to the new ocs-client-operator, the Rook resources are no longer created in the consumer cluster. The provider should therefore send the subvolumegroup details back to the consumer so that the CSI driver can use them to mount existing/migrated volumes as well as new volumes/PVCs.
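
As a rough illustration of the intended flow, here is a minimal Go sketch. The type names, payload shape, and field names (ExternalResource, "subvolumegroupname", csiClusterConfig) are hypothetical stand-ins modeled on the provider/consumer exchange and on the per-cluster CephFS subvolume group setting that ceph-csi reads from its config; they are not the actual ocs-operator or ocs-client-operator definitions.

package main

import (
	"encoding/json"
	"fmt"
)

// ExternalResource is a hypothetical stand-in for one resource blob returned
// by the provider for a storage class claim; it is not the real RPC message.
type ExternalResource struct {
	Name string            `json:"name"`
	Kind string            `json:"kind"`
	Data map[string]string `json:"data"`
}

// csiClusterConfig is modeled on the per-cluster config that ceph-csi reads,
// where a CephFS subvolume group can be set; field names are assumptions.
type csiClusterConfig struct {
	ClusterID string   `json:"clusterID"`
	Monitors  []string `json:"monitors"`
	CephFS    struct {
		SubvolumeGroup string `json:"subvolumeGroup"`
	} `json:"cephFS"`
}

func main() {
	// Pretend this came back from the provider's GetStorageClassClaimConfig
	// RPC; before the fix, the subvolumegroup name was simply absent.
	resp := ExternalResource{
		Name: "cephfs",
		Kind: "CephFilesystemSubVolumeGroup",
		Data: map[string]string{
			"filesystemName":     "ocs-storagecluster-cephfilesystem", // example value
			"subvolumegroupname": "csi",                               // the missing piece this bug tracks
		},
	}

	// The consumer side would fold the value into the ceph-csi cluster config
	// so the driver mounts subvolumes from the right subvolume group.
	cfg := csiClusterConfig{
		ClusterID: "openshift-storage",
		Monitors:  []string{"10.0.0.1:6789"},
	}
	cfg.CephFS.SubvolumeGroup = resp.Data["subvolumegroupname"]

	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

Without the subvolumegroup name in the response, the consumer has no way to point the CSI driver at the subvolume group that already holds the existing/migrated subvolumes, so mounts fail.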

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy ODF 4.12 in provider mode
2. Deploy ocs-client-operator in consumer mode
3. Create a cephfs StorageClassClaim and a PVC, and bind the PVC to an application pod (a rough sketch of these objects is shown below).
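
The exact manifests are not included in this report; the following Go sketch prints hypothetical JSON for the claim and PVC in step 3, assuming the ocs.openshift.io/v1alpha1 StorageClassClaim API, a "sharedfilesystem" type for cephfs, and a storage class named after the claim (all assumptions, not confirmed by this bug).

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical cephfs StorageClassClaim from step 3; apiVersion, kind and
	// spec fields are assumptions about the ocs-client-operator CRD, not
	// values taken from this report.
	claim := map[string]interface{}{
		"apiVersion": "ocs.openshift.io/v1alpha1",
		"kind":       "StorageClassClaim",
		"metadata":   map[string]interface{}{"name": "cephfs-claim"},
		"spec": map[string]interface{}{
			"type": "sharedfilesystem", // requests a CephFS-backed storage class
		},
	}

	// A PVC that consumes the resulting storage class; the storage class name
	// is assumed to match the claim name, and the app pod would mount this PVC.
	pvc := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "PersistentVolumeClaim",
		"metadata":   map[string]interface{}{"name": "cephfs-pvc"},
		"spec": map[string]interface{}{
			"accessModes":      []string{"ReadWriteMany"},
			"storageClassName": "cephfs-claim",
			"resources": map[string]interface{}{
				"requests": map[string]interface{}{"storage": "1Gi"},
			},
		},
	}

	for _, obj := range []map[string]interface{}{claim, pvc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}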


Actual results:


Expected results:

The PVC should reach the Bound state and the application pod should be Running.

Additional info:

--- Additional comment from RHEL Program Management on 2023-03-30 13:05:54 UTC ---

This bug previously had no release flag set; release flag 'odf-4.13.0' has now been set to '?', so the bug is being proposed for fixing in the ODF 4.13.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were set while the release flag was missing, have now been reset, since Acks are set against a release flag.

--- Additional comment from RHEL Program Management on 2023-03-31 14:22:25 UTC ---

This BZ is being approved for the ODF 4.13.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.13.0'.

--- Additional comment from RHEL Program Management on 2023-03-31 14:22:25 UTC ---

Since this bug has been approved for the ODF 4.13.0 release through release flag 'odf-4.13.0+', the Target Release is being set to 'ODF 4.13.0'.

Comment 15 errata-xmlrpc 2023-05-23 09:17:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.3 Security and Bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3265

