Bug 2168540 - mon: prevent allocating snapids allocated for CephFS
Summary: mon: prevent allocating snapids allocated for CephFS
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3z3
Assignee: Milind Changire
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks: 2203283
 
Reported: 2023-02-09 11:05 UTC by Milind Changire
Modified: 2023-05-23 00:19 UTC
CC: 8 users

Fixed In Version: ceph-16.2.10-165.el8cp
Doc Type: Bug Fix
Doc Text:
.Prevent unintentional data loss during snap deletion
Previously, due to a namespace collision between pool-level snaps and fs-level snaps, there was a conflict in identifying which namespace a snap-ID belonged to when that snap-ID was passed to the monitor during snap deletion. This was because the two kinds of snaps, although they had independent namespaces, were both managed by the Ceph Monitor. With this fix, pool-level snap creation is disabled for a pool attached to a Ceph File System (CephFS), and attaching a pool to a CephFS is disallowed when the pool already has pool-level snaps. This prevents unintentional loss of data when snaps are deleted from one namespace while the user intended the other.
Clone Of:
Environment:
Last Closed: 2023-05-23 00:19:10 UTC
Embargoed:
hyelloji: needinfo+
hyelloji: needinfo-
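
The behavior described in the Doc Text above amounts to two monitor-side guards. The sketch below is a hypothetical Python model of that logic, not the actual Ceph C++ code; the names (Pool, FSMap, mksnap) are illustrative only.

```python
# Hypothetical model of the two guards from the fix (not Ceph source code).

class Pool:
    def __init__(self, name, has_pool_snaps=False):
        self.name = name
        self.has_pool_snaps = has_pool_snaps

class FSMap:
    """Tracks which pools are attached to a CephFS file system."""
    def __init__(self):
        self.fs_pools = set()

    def attach_pool(self, pool):
        # Guard 2: refuse to attach a pool that already has
        # pool-level snaps to a CephFS file system.
        if pool.has_pool_snaps:
            raise ValueError(
                f"pool '{pool.name}' already has pool-level snaps; "
                "cannot attach it to a CephFS file system")
        self.fs_pools.add(pool.name)

def mksnap(fsmap, pool):
    # Guard 1: refuse pool-level snap creation on a pool in the FSMap,
    # since the MDS manages its own snap-ID namespace for such pools.
    if pool.name in fsmap.fs_pools:
        raise ValueError(
            f"pool '{pool.name}' is attached to a CephFS file system; "
            "pool-level snaps are disabled")
    pool.has_pool_snaps = True
```

With both guards in place, no pool can ever carry snap-IDs from both namespaces, so a snap-ID passed to the monitor for deletion is unambiguous.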




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-6118 0 None None None 2023-02-09 11:05:47 UTC
Red Hat Product Errata RHBA-2023:3259 0 None None None 2023-05-23 00:19:46 UTC

Description Milind Changire 2023-02-09 11:05:09 UTC
Description of problem:
The MDS allocates its own snapids. In general, the monitor allocates self-managed snapids for librados users.

We need to prevent these allocations from colliding, probably by disabling monitor snapid allocation on pools in the FSMap.
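
The collision can be illustrated with a minimal sketch, assuming each allocator simply hands out IDs from its own counter. This is not Ceph code; it only shows why the same numeric snap-ID can end up naming two unrelated snapshots.

```python
# Illustration of the snap-ID collision (not Ceph code): the MDS and the
# monitor each allocate from an independent counter, so the same numeric
# ID can denote both an fs-level snap and a pool-level snap.
import itertools

mds_ids = itertools.count(1)   # fs-level snap IDs allocated by the MDS
mon_ids = itertools.count(1)   # self-managed snap IDs allocated by the monitor

fs_snap = next(mds_ids)        # snap-ID 1: a CephFS snapshot
pool_snap = next(mon_ids)      # snap-ID 1 again: a pool-level snapshot

# When only the bare ID reaches the monitor during deletion, there is no
# way to tell which namespace the caller meant:
assert fs_snap == pool_snap
```

Deleting "snap 1" through the wrong namespace is exactly the unintentional data loss the fix prevents.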

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 16 errata-xmlrpc 2023-05-23 00:19:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259

