Bug 2104616 - [TestOnly] Upgrade testing for ceph-mgr interface use in the CephFS driver
Summary: [TestOnly] Upgrade testing for ceph-mgr interface use in the CephFS driver
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-manila
Version: 17.1 (Wallaby)
Hardware: All
OS: All
Priority: urgent
Severity: medium
Target Milestone: ga
Target Release: 17.1
Assignee: Goutham Pacha Ravi
QA Contact: lkuchlan
Docs Contact: Jenny-Anne Lynch
URL:
Whiteboard:
Depends On: 1890531 2224351
Blocks:
 
Reported: 2022-07-06 17:24 UTC by Goutham Pacha Ravi
Modified: 2024-03-22 14:54 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-03-22 14:54:27 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker OSP-16280 (last updated 2022-07-06 17:37:07 UTC)

Description Goutham Pacha Ravi 2022-07-06 17:24:37 UTC
This bug was initially created as a copy of Bug #1767084


In stable/wallaby (OSP 17.0), Manila's CephFS driver uses the ceph-mgr API instead of the ceph volume client [1][2]. Shares created through the ceph-mgr API are "version 2" (V2) subvolumes [3]. From the docstring in [3]:


"""
    Version 2 subvolumes creates a subvolume with path as follows,
        volumes/<group-name>/<subvolume-name>/<uuid>/
    The distinguishing feature of V2 subvolume as compared to V1 subvolumes is its ability to retain snapshots
    of a subvolume on removal. This is done by creating snapshots under the <subvolume-name> directory,
    rather than under the <uuid> directory, as is the case of V1 subvolumes.
    - The directory under which user data resides is <uuid>
    - Snapshots of the subvolume are taken within the <subvolume-name> directory
    - A meta file is maintained under the <subvolume-name> directory as a metadata store, storing information similar
    to V1 subvolumes
    - On a request to remove subvolume but retain its snapshots, only the <uuid> directory is moved to trash, retaining
    the rest of the subvolume and its meta file.
        - The <uuid> directory, when present, is the current incarnation of the subvolume, which may have snapshots of
        older incarnations of the same subvolume.
    - V1 subvolumes that currently do not have any snapshots are upgraded to V2 subvolumes automatically, to support the
    snapshot retention feature
    """


Shares created via the ceph volume client were V1 subvolumes, so after an upgrade from OSP 16.x, existing shares will remain V1 subvolumes, whose path is [4]:

     volumes/<group-name>/<subvolume-name>/<uuid>/


Ceph will continue to work with both V1 subvolumes (existing shares from OSP 16.x) and V2 subvolumes (new shares in OSP 17.x and later).
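
Since both layouts must keep serving the same export paths across the upgrade (see the requirement below), one straightforward check is to record every share's export locations before the upgrade and diff the record afterwards. A minimal sketch, assuming an admin openrc has been sourced and the legacy `manila` CLI is installed; the table parsing (share ID in the first column) and the output file name are assumptions, not anything prescribed by this bug:

    #!/usr/bin/env python3
    """Sketch: snapshot share export locations before/after the upgrade so
    they can be diffed. Assumes an admin openrc has been sourced and the
    `manila` CLI is available."""

    import subprocess
    import sys


    def manila(*args):
        return subprocess.run(["manila", *args], check=True,
                              capture_output=True, text=True).stdout


    # `manila list` prints an ASCII table; the first data column is the share ID.
    share_ids = [line.split("|")[1].strip()
                 for line in manila("list").splitlines()
                 if line.startswith("|") and "ID" not in line]

    outfile = sys.argv[1] if len(sys.argv) > 1 else "export-locations.txt"
    with open(outfile, "w") as f:
        for share_id in share_ids:
            # Raw table output is good enough for a before/after diff.
            f.write(f"=== {share_id} ===\n")
            f.write(manila("share-export-location-list", share_id))
            f.write("\n")
    print(f"wrote {outfile}; run once before and once after the upgrade, then diff")
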

We will need to test that there is no disruption to clients, i.e., existing share export paths must remain the same as in OSP 16.x and access to them must not be interrupted. To verify this, we should start an autonomous write workload that must keep working throughout the OSP upgrade (a sketch of such a workload follows).
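
For the autonomous write workload, a timestamped append loop on a mounted share is enough to detect I/O interruptions. A minimal sketch, assuming the existing share is already mounted on the test client (e.g. via ceph-fuse or the kernel client) and its mount point is passed on the command line; the canary file name and write interval are arbitrary:

    #!/usr/bin/env python3
    """Sketch: continuous write workload to detect client disruption during
    the OSP 16.x -> 17.x upgrade. Assumes the share under test is already
    mounted and its mount point is passed as the first argument."""

    import os
    import sys
    import time
    from datetime import datetime, timezone

    mount_point = sys.argv[1]
    target = os.path.join(mount_point, "upgrade-io-canary.log")
    interval = 1.0  # seconds between writes

    while True:
        stamp = datetime.now(timezone.utc).isoformat()
        try:
            with open(target, "a") as f:
                f.write(stamp + "\n")
                f.flush()
                os.fsync(f.fileno())  # force the write through to CephFS
            print(f"{stamp} ok")
        except OSError as exc:
            # Any error or gap here indicates the upgrade disrupted client I/O.
            print(f"{stamp} WRITE FAILED: {exc}", file=sys.stderr)
        time.sleep(interval)
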


[1] https://github.com/ceph/ceph/tree/v15.0.0/src/ceph-volume
[2] https://opendev.org/openstack/manila/src/commit/de89e124891e7a1e4709cfb91ec701554a716906/manila/share/drivers/cephfs/driver.py#L36-L41
[3] https://github.com/ceph/ceph/blob/52185a10764841c31b50d4a89496db5dfeb9fb35/src/pybind/mgr/volumes/fs/operations/versions/subvolume_v2.py 
[4] https://github.com/ceph/ceph/blob/52185a10764841c31b50d4a89496db5dfeb9fb35/src/pybind/mgr/volumes/fs/operations/versions/subvolume_v1.py#L33-L45


Version-Release number of selected component (if applicable): 17 (z stream)

