From the perspective of CSI and its volume life cycle management, a snapshot of a volume is expected to survive beyond the volume itself. In other words, the volume may be deleted and later recreated from one of its prior snapshots. Although the CSI protocol has changed over time to allow snapshots to depend on their sources, disallowing source volume deletion while snapshots exist, this is not a natural flow of events and life cycle management operations. It is hence desirable that snapshots remain independent of the source subvolume, to aid the life cycle operations detailed above.

With CephFS, subvolume snapshots are taken at the directory level of the subvolume, and hence depend on the subvolume: to delete the subvolume, all snapshots within it must be deleted first. This breaks the desired state described above.

This bug tracks the fix, which should land with the RHCS Ceph version that would be in use for OCS 4.6. The upstream Ceph tracker is: https://tracker.ceph.com/issues/45729

Is there any workaround available to the best of your knowledge?

Workarounds, and the ability to proceed with developing the feature in ceph-csi even without this fix in place, are covered here: https://github.com/ceph/ceph-csi/issues/702#issuecomment-638213533
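For illustration, here is a minimal sketch of the lifecycle at issue, driving the `ceph fs subvolume` CLI from Python. The volume, subvolume, and snapshot names are hypothetical; it assumes a reachable cluster with admin credentials and, for the final step, a Ceph build that already carries the fix from the tracker above (the `--retain-snapshots` flag on `ceph fs subvolume rm`).

    import subprocess

    def ceph(*args: str) -> str:
        """Run a ceph CLI command and return its stdout, raising on failure."""
        return subprocess.run(
            ["ceph", *args], check=True, capture_output=True, text=True
        ).stdout

    # Hypothetical names, for illustration only.
    VOL, SUBVOL, SNAP = "cephfs", "csi-subvol-demo", "snap-1"

    # Create a subvolume and take a snapshot of it.
    ceph("fs", "subvolume", "create", VOL, SUBVOL)
    ceph("fs", "subvolume", "snapshot", "create", VOL, SUBVOL, SNAP)

    # Without the fix, removing the subvolume is refused while any of its
    # snapshots exist, forcing snapshot deletion before volume deletion --
    # the inverted ordering the CSI lifecycle wants to avoid.
    try:
        ceph("fs", "subvolume", "rm", VOL, SUBVOL)
    except subprocess.CalledProcessError as e:
        print("rm refused while snapshots exist:", e.stderr.strip())

    # With the fix, the subvolume can be removed while its snapshots are
    # retained, matching the CSI expectation that snapshots outlive the volume.
    ceph("fs", "subvolume", "rm", VOL, SUBVOL, "--retain-snapshots")

A retained snapshot can then serve as the source for a new subvolume, restoring the natural CSI ordering of delete-volume-then-restore-from-snapshot.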
Can we get a QA_ACK?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144