Bug 2203746

Summary: mon: 'ceph osd pool mksnap' still allows snaps to be created for fs pools
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Milind Changire <mchangir>
Component: CephFS
Assignee: Milind Changire <mchangir>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.3
CC: akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, rmandyam, tserlin, vshankar
Target Milestone: ---
Target Release: 5.3z5
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.10-203.el8cp
Doc Type: Bug Fix
Doc Text:
.Creation of pool-level snaps for pools actively associated with a filesystem is disallowed
Previously, the `ceph osd pool mksnap` command allowed the creation of pool-level snaps for pools actively associated with a filesystem. As a result, data loss was possible when snapshots were deleted from either the filesystem or the pool, due to a snapshot ID collision. With this fix, creation of pool-level snaps for pools actively associated with a filesystem is disallowed, and no data loss occurs. An illustrative sketch of the expected post-fix behavior is shown after the header fields below.
Story Points: ---
Clone Of: 2189787
Environment:
Last Closed: 2023-08-28 09:40:56 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2189787, 2222047    
Bug Blocks:    
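
Illustrative sketch of the expected behavior after the fix described in the Doc Text above (host, the non-CephFS pool name, and the error wording are examples only; the exact error text may differ between builds):

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap2
<command is rejected because pool 'cephfs.newfs.data' is in use by CephFS -- exact error text not reproduced here>

[root@magna021 ~]# ceph osd pool mksnap testpool snap-ok
created pool testpool snap snap-ok
('testpool' is a hypothetical pool not associated with any file system; such pools can still be snapshotted.)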

Description Milind Changire 2023-05-15 07:26:10 UTC
+++ This bug was initially created as a clone of Bug #2189787 +++

Description of problem:
-----------------------
While verifying BZ https://bugzilla.redhat.com/show_bug.cgi?id=2168541, we found that there are two ways to create a snap on RADOS pools. The fix covers the "rados mksnap" path, but "ceph osd pool mksnap" still allows a snap to be created on a pool that is part of an existing file system.
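
To illustrate the two paths, using the pools from the steps below (the rejection output for the already-fixed "rados mksnap" path is only indicative; exact error text may vary):

[root@magna021 ~]# rados -p cephfs.newfs.data mksnap snap0
<rejected: the pool is part of a CephFS file system -- behavior added by the fix for BZ 2168541>

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap2
created pool cephfs.newfs.data snap snap2
(still accepted, which is the issue reported here)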

Version-Release number of selected component (if applicable):
----------
ceph version 17.2.6-25.el9cp

How reproducible:
----------------
Always


Steps to Reproduce:
-------------------
[root@magna021 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
name: newfs, metadata pool: cephfs.newfs.meta, data pools: [cephfs.newfs.data ]

[root@magna021 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
4 .nfs
5 cephfs.newfs.meta
6 cephfs.newfs.data

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.meta snap1
created pool cephfs.newfs.meta snap snap1

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap2
created pool cephfs.newfs.data snap snap2

[root@magna021 ~]# rados lssnap -p cephfs.newfs.meta
1       snap1   2023.04.24 19:04:39
1 snaps

[root@magna021 ~]# rados lssnap -p cephfs.newfs.data
1       snap2   2023.04.24 19:04:47
1 snaps
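
If the cluster needs to be cleaned up after reproducing the issue, the snaps created above can be removed with the pool-level rmsnap command (cleanup sketch; exact output may vary slightly):

[root@magna021 ~]# ceph osd pool rmsnap cephfs.newfs.meta snap1
removed pool cephfs.newfs.meta snap snap1

[root@magna021 ~]# ceph osd pool rmsnap cephfs.newfs.data snap2
removed pool cephfs.newfs.data snap snap2

[root@magna021 ~]# rados lssnap -p cephfs.newfs.data
0 snaps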

--- Additional comment from Venky Shankar on 2023-04-27 11:01:14 IST ---

Milind, please clone one for 5.3.

Comment 3 Hemanth Kumar 2023-07-25 17:01:28 UTC
*** Bug 2222047 has been marked as a duplicate of this bug. ***

Comment 11 errata-xmlrpc 2023-08-28 09:40:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4760