Bug 1848494 - pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 4.1
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: z2
Target Release: 4.1
Assignee: Shyamsundar
QA Contact: Hemanth Kumar
Docs Contact: Aron Gunn
URL:
Whiteboard:
Depends On:
Blocks: 1816167 1854501 1859464
 
Reported: 2020-06-18 13:11 UTC by Shyamsundar
Modified: 2020-09-30 17:26 UTC
CC: 13 users

Fixed In Version: ceph-14.2.8-100.el8cp, ceph-14.2.8-100.el7cp
Doc Type: Enhancement
Doc Text:
.Independent life-cycle operations for subvolumes and subvolume snapshots
The CSI protocol treats snapshots as first-class objects, which requires source subvolumes and subvolume snapshots to operate independently of each other. Because the Kubernetes storage interface uses the CSI protocol, subvolume removal with a snapshot retention option (`--retain-snapshots`) has been implemented. This allows other life-cycle operations on a retained snapshot to proceed appropriately.
Clone Of:
Clones: 1854501 1859464
Environment:
Last Closed: 2020-09-30 17:25:42 UTC
Embargoed:
hyelloji: needinfo-




Links:
Ceph Project Bug Tracker 46821 (last updated 2020-08-03 18:22:05 UTC)
Red Hat Product Errata RHBA-2020:4144 (last updated 2020-09-30 17:26:08 UTC)
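
For illustration of the enhancement described in the Doc Text field above, here is a sketch of the intended flow using the upstream `ceph fs subvolume` CLI; the volume, subvolume, and snapshot names are placeholders:

  # snapshot an existing subvolume
  ceph fs subvolume snapshot create cephfs subvol1 snap1

  # remove the subvolume while retaining its snapshots
  ceph fs subvolume rm cephfs subvol1 --retain-snapshots

  # life-cycle operations on the retained snapshot remain possible,
  # for example removing it once it is no longer needed
  ceph fs subvolume snapshot rm cephfs subvol1 snap1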

Description Shyamsundar 2020-06-18 13:11:42 UTC
From the perspective of CSI and its volume life-cycle management, a snapshot of a volume is expected to survive beyond the volume itself. In other words, the volume may be deleted and later recreated from one of its prior snapshots.

Although the CSI protocol has changed over time to allow snapshots to depend on their sources, disallowing source volume deletion while snapshots exist, this is not a natural flow of events and life-cycle management operations.

It is hence desired that snapshots remain independent of the source subvolume, to aid the life-cycle operations detailed above.

With CephFS, subvolume snapshots are taken at the directory level of the subvolume and are hence dependent on it. To delete the subvolume, all snapshots within it must be deleted first. This breaks the desired state described above.

This bug is created to track the fix, which is to land in the RHCS Ceph version that will be used by OCS 4.6. The upstream Ceph tracker is: https://tracker.ceph.com/issues/45729

Is there any workaround available to the best of your knowledge?

Workarounds, and the ability to proceed with developing the feature in ceph-csi even without this fix in place, are covered here: https://github.com/ceph/ceph-csi/issues/702#issuecomment-638213533
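
For reference, a sketch of the desired life cycle once snapshot retention is available, using the upstream `ceph fs subvolume` CLI (names are placeholders; recreating from a retained snapshot assumes clone-from-snapshot support as implemented upstream):

  # snapshot the source subvolume
  ceph fs subvolume snapshot create cephfs subvol1 snap1

  # delete the subvolume while keeping its snapshot
  ceph fs subvolume rm cephfs subvol1 --retain-snapshots

  # recreate a subvolume from the retained snapshot
  ceph fs subvolume snapshot clone cephfs subvol1 snap1 subvol2

  # check clone progress
  ceph fs clone status cephfs subvol2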

Comment 4 Yaniv Kaul 2020-07-15 17:26:39 UTC
Can we get a QA_ACK?

Comment 16 errata-xmlrpc 2020-09-30 17:25:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

