Bug 2264900
| Summary: | PVC cloning is failing with error "RBD image not found" | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | nijin ashok <nashok> |
| Component: | csi-driver | Assignee: | Rakshith <rar> |
| Status: | CLOSED ERRATA | QA Contact: | Yuli Persky <ypersky> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.14 | CC: | alitke, asriram, kbg, kramdoss, muagarwa, odf-bz-bot, rar, tdesala, ypersky |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.16.0 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.16.0-86 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-07-17 13:14:03 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2260844 | | |

Doc Text:

.PVC cloning failed with the error "RBD image not found"
Previously, volume snapshot restore failed when the parent of the snapshot no longer existed, because a bug in the CephCSI driver falsely identified an RBD image in the trash as still existing.
With this fix, the CephCSI driver identifies images in the trash correctly, and as a result the volume snapshot is restored successfully even when the parent of the snapshot no longer exists.
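The trash lookup described in the Doc Text can be inspected directly with the `rbd` CLI (for example from the rook-ceph toolbox pod); a minimal sketch, assuming the default ODF pool name `ocs-storagecluster-cephblockpool`, which is cluster-specific:

```shell
# Pool name below is the usual ODF default; adjust it for your cluster.
# Images that are live in the pool:
rbd ls --pool ocs-storagecluster-cephblockpool

# Images that were moved to the trash, e.g. a deleted parent image
# that is kept around because snapshots/clones still reference it:
rbd trash ls --pool ocs-storagecluster-cephblockpool
```

A parent image that shows up only in the trash listing is the case the driver previously misread as a live image, which led to the failing clone described in this bug.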
Description
nijin ashok
2024-02-19 15:50:13 UTC
Looks like the bug is still not fixed in 4.16.0-94. I've tried the first 4 steps out of the following (scenario specified in comment #3):

- Create PVC (created pvc1)
- Create Snapshot (created pvc1snap1)
- Delete PVC (deleted pvc1)
- Restore Snapshot into pvc-restore (tried to restore pvc1snap1 to a new PVC)

Expected result: the restore action should be enabled in the UI and should work.
Actual result: the option to restore is greyed out.

I did check that it is possible to restore from a snapshot when the initial PVC is not deleted.

Due to comments #9 and #10 the BZ has failed QA and was reopened (status changed to Assigned).

@Rakshith, I've deployed another 4.16 cluster and reproduced this scenario again with an RWX block mode PVC, and the problem was reproduced once again. To be more specific:

1) I've created an RWX block mode PVC (ypersky-pvc1)
2) I've successfully created a snapshot (ypersky-pvc1-snapshot1)
3) I've successfully restored this snapshot ypersky-pvc1-snapshot1 to a PVC (ypersky-pvc1-snapshot1-restore1), to make sure that Restore is possible while the initial PVC is not deleted
4) I've deleted ypersky-pvc1
5) When I try to restore again from ypersky-pvc1-snapshot1, the Restore option is greyed out as in the attached print screen

I did try changing the access mode to each of RWO, RWX, and ROX; for every one of them the Restore is not possible (greyed out).

You are welcome to check on this cluster: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-simple-deploy-odf-cluster/66/
The above cluster will be available for a few more days. Reopening the BZ again.

As for the BZ verification on the CLI, it is possible to restore a PVC from a snapshot on 4.16.0-94 when the parent PVC is deleted. Scenario (an end-to-end command sketch follows this comment):

1) create pvc1
2) create ypersky-pvc1-snapshot1
3) delete pvc1
4) create pvc1-snapshot1-restore1-cli with the following command: oc create -f <restore_yaml>

The content of the yaml file is:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1-snapshot1-restore1-cli
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: ypersky-pvc1-snapshot1
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
  volumeMode: Block
```

Moving this BZ to verified state; will open a new BZ for the UI.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591
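The CLI verification scenario above can be run end to end with commands along these lines; a minimal sketch, assuming the snapshot class name `ocs-storagecluster-rbdplugin-snapclass` (the usual ODF default, verify with `oc get volumesnapshotclass`) and the PVC/snapshot names used in the comments:

```shell
# Assumes step 1 is done: a PVC named pvc1, bound from the
# ocs-storagecluster-ceph-rbd storage class in the default namespace.

# 2) Create the VolumeSnapshot that the restore yaml above references.
#    The snapshot class name is an assumption (the usual ODF default);
#    verify it with: oc get volumesnapshotclass
cat <<'EOF' | oc create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ypersky-pvc1-snapshot1
  namespace: default
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: pvc1
EOF

# 3) Once the snapshot reports READYTOUSE=true, delete the parent PVC.
oc get volumesnapshot ypersky-pvc1-snapshot1 -n default
oc delete pvc pvc1 -n default

# 4) Restore the snapshot into a new PVC (restore.yaml holds the
#    PersistentVolumeClaim manifest quoted in the comment above).
oc create -f restore.yaml
oc get pvc pvc1-snapshot1-restore1-cli -n default
```

On a build with the fix (4.16.0-86 or later), the restored PVC should reach Bound even though the parent PVC no longer exists; before the fix, provisioning failed with the "RBD image not found" error described in this bug.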