Bug 1977022
| Summary: | Snapshot page is showing wrong Source reference if a new PVC is created with the name of snapshot's source PVC | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Jilju Joy <jijoy> |
| Component: | csi-driver | Assignee: | Rakshith <rar> |
| Status: | CLOSED WONTFIX | QA Contact: | Elad <ebenahar> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.8 | CC: | anbehl, aos-bugs, muagarwa, ndevos, nobody, nthomas, ocs-bugs, odf-bz-bot, rar |
| Target Milestone: | --- | Flags: | hchiramm: needinfo? (jijoy), hchiramm: needinfo? (anbehl) |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-03-20 10:43:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Jilju Joy, 2021-06-28 18:41:07 UTC)
Same issue is seen for the storageclass reference on the PVC details page if the storageclass used for the PVC is deleted and a new storageclass is created with the same name. The parameters of the new SC can be different from the old SC, so this issue arises in all cases where the reference is provided by name.

---

(In reply to Jilju Joy from comment #1)

This is as expected! If you delete the original class and create a new class with the same name and different parameters, the pointer of the PVC changes to the new class, which only supports the parameters in the new storage class. I agree that for snapshots this needs to be handled somehow.

---

Is it possible to use the UID as the reference rather than the resource name? That way we would get a unique value.

---

@jilju I checked the YAML for snapshots and there is no hash value other than the UUID that would enable us to do so. Furthermore, even if there were, it would require the same hash to be present in the PVC spec so associations could be formed without any preprocessing (hence covering the corner case where operations are performed via the CLI first before coming back to the UI). This would also cover the second case, because if the same hash were present in the PVC spec itself, it would eliminate the need to parse the volume handle it is bound to, so it would then work in static as well as dynamically provisioned cases. We can't propose any feasible solution without the aforementioned changes in the backend first. Hence, moving this to CSI. Thanks.

---

Existing issue, not a 4.8 blocker.

---

Here is what I think about this issue: as long as the snapshot reference is mapped to the PVC name, this looks like expected behaviour. That said, if the PVC got recreated, the reference resolves the way it does because we (the OCP console) used the PVC name as the mapping point between these objects. Rather than a core issue, I feel this is a rendering issue in the ODF/OCP console.

Every snapshot has a VolumeSnapshotContent mapped in the backend, and the VolumeSnapshotContent carries a volume handle reference under `spec->source->volumeHandle`. The volumeHandle is the unique identifier of the volume created on the storage backend, returned by the CSI driver during volume creation. This field is required for dynamically provisioning a snapshot; it specifies the volume source of the snapshot.

If I understand the previous comments correctly, the UI maintains a hash table built from the PVC-name-to-snapshot mapping. Can we make use of this `volumeHandle` field to resolve this issue in a proper way? Please revert if I missed any UI workflows here.
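To make the comparison concrete, here is a minimal TypeScript sketch of the equality check proposed above, assuming both objects have already been fetched. The field paths (`spec.source.volumeHandle` on the VolumeSnapshotContent, `spec.csi.volumeHandle` on the PV) are the standard Kubernetes snapshot/CSI fields, but the interfaces and function name are simplified illustrations, not the console's actual types.

```typescript
// Simplified structural types covering only the fields compared below;
// hypothetical stand-ins, not the console's real typings.
interface VolumeSnapshotContent {
  spec: { source: { volumeHandle?: string } };
}

interface PersistentVolume {
  spec: { csi?: { volumeHandle: string } };
}

// A snapshot and a PVC refer to the same backend volume only if the handle
// recorded in the VolumeSnapshotContent matches the handle in the PV the PVC
// is currently bound to. A recreated PVC with the same name binds to a new
// PV with a different volumeHandle, so the check fails as desired.
function sameBackendVolume(
  vsc: VolumeSnapshotContent,
  pv: PersistentVolume,
): boolean {
  const snapHandle = vsc.spec.source.volumeHandle;
  const pvHandle = pv.spec.csi?.volumeHandle;
  return !!snapHandle && !!pvHandle && snapHandle === pvHandle;
}
```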
---

As per the discussion with the Ceph-CSI team, we figured out that the only solution right now is the one that @Humble mentioned above (https://bugzilla.redhat.com/show_bug.cgi?id=1977022#c14), but it is not an optimal solution from the UI perspective. To get the mapping between a PVC and a snapshot, the UI needs to call 4 different APIs to understand whether the PVC is the original one from which the snapshot was created.

Steps required to get the mapping of PVC and snapshot as per the current implementation (a code sketch of this flow follows this comment):

1. Get the VolumeSnapshot resource to obtain the VolumeSnapshotContent name.
2. Get the VolumeSnapshotContent resource to obtain the volume handle (`spec->source->volumeHandle`).
3. Get the PVC name from the VolumeSnapshot and pull the PVC resource to obtain the PV name.
4. From the PV resource, get the volume handle (`spec->csi->volumeHandle`).

At last, we need to do the equality check to figure out whether the mapping is correct. This is not a scalable approach: if thousands of snapshots have been created, we need to poll 4x the APIs just to get this mapping from snapshot to PVC.

Proposed solution: while creating the snapshot, the k8s backend could pass a UUID with the PVC name to establish the uniqueness of the resource. If that UUID is not found on the PVC resource, then we (the UI) can say that this PVC doesn't belong to this snapshot.
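A rough sketch of that 4-call flow, for illustration only: the resource paths are the standard Kubernetes REST endpoints for these objects, but `apiGet` is a hypothetical helper, and real console code would go through its own API layer with proper auth, caching, and error handling.

```typescript
// Hypothetical helper: GET a Kubernetes API path and parse the JSON body.
async function apiGet(path: string): Promise<any> {
  const res = await fetch(path);
  if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
  return res.json();
}

// Follows the 4 reads described above and returns true only if the named
// snapshot's recorded volume handle matches the volume its source PVC is
// currently bound to.
async function snapshotMatchesPvc(ns: string, snapshotName: string): Promise<boolean> {
  // 1. VolumeSnapshot -> bound VolumeSnapshotContent name (and source PVC name).
  const vs = await apiGet(
    `/apis/snapshot.storage.k8s.io/v1/namespaces/${ns}/volumesnapshots/${snapshotName}`,
  );
  const vscName = vs.status?.boundVolumeSnapshotContentName;
  const pvcName = vs.spec?.source?.persistentVolumeClaimName;
  if (!vscName || !pvcName) return false;

  // 2. VolumeSnapshotContent -> volume handle of the snapshotted volume.
  const vsc = await apiGet(
    `/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/${vscName}`,
  );
  const snapHandle = vsc.spec?.source?.volumeHandle;

  // 3. PVC -> name of the PV it is currently bound to.
  const pvc = await apiGet(
    `/api/v1/namespaces/${ns}/persistentvolumeclaims/${pvcName}`,
  );
  const pvName = pvc.spec?.volumeName;
  if (!pvName) return false;

  // 4. PV -> volume handle, then the final equality check.
  const pv = await apiGet(`/api/v1/persistentvolumes/${pvName}`);
  return !!snapHandle && snapHandle === pv.spec?.csi?.volumeHandle;
}
```

Even in this compressed form, the four sequential reads per snapshot make the scalability concern above easy to see.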
---

(In reply to Ankush Behl from comment #17)

Ankush, thanks for summarizing the discussions so far. Let me check other methods/possibilities here.

---

The upstream issue is still open with no progress, moving it out.

---

Humble, please update the BZ with a summary of the discussion that happened in the upstream community. The issue is closed now; any chance we will fix this BZ, or should we close it as WONTFIX?

---

I will reopen the discussions on this once again upstream. Regardless, this is not going to be a 4.11 item; I have removed the 4.11 flag and am marking it for 4.12.

---

The linked upstream Kubernetes issue was closed because it went stale.

---

This won't be addressed in time for ODF-4.12, moving it out to 4.13.

---

Considering it is an API change, lengthy discussions are needed, and it also has to be backed with proper justification, etc. One difficulty here is that having another way to track an instance, and the recreation of an instance with the same name, is a bit awkward and does not fall into the general path of operation. Regardless, I am pushing once again upstream, so I am keeping this BZ open. Meanwhile, if the QE/UI team thinks (needinfo added for the same) this no longer needs to be prioritized, please say so.

---

Considering that we have been serving ODF customers in this same state for many releases and haven't seen any customer issues to attach or link here, I think this is a total non-priority and not a worthwhile enhancement.