Bug 2189787

Summary: mon: 'ceph osd pool mksnap' still allows snaps to be created for fs pools
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Hemanth Kumar <hyelloji>
Component: CephFS    Assignee: Milind Changire <mchangir>
Status: CLOSED ERRATA QA Contact: Hemanth Kumar <hyelloji>
Severity: high Docs Contact: Akash Raj <akraj>
Priority: unspecified    
Version: 6.1    CC: akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, mchangir, tserlin, vshankar
Target Milestone: ---   
Target Release: 6.1z1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.6-88.el9cp Doc Type: Bug Fix
Doc Text:
.Creation of pool-level snaps for pools actively associated with a filesystem is disallowed
Previously, the `ceph osd pool mksnap` command allowed the creation of pool-level snaps for pools actively associated with a filesystem. As a result, data loss was possible when snapshots were deleted from either the filesystem or the pool, due to snapshot ID collisions. With this fix, creation of pool-level snaps for pools actively associated with a filesystem is disallowed, and no data loss occurs.
Story Points: ---
Clone Of:
: 2203746, 2222047 (view as bug list)
Environment:
Last Closed: 2023-08-03 16:45:09 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2203746, 2221020, 2222047    

Description Hemanth Kumar 2023-04-26 07:45:06 UTC
Description of problem:
-----------------------
While verifying BZ https://bugzilla.redhat.com/show_bug.cgi?id=2168541, it was found that there are two ways to create a snap on RADOS pools. The fix from that BZ only covers the "rados mksnap" path; with "ceph osd pool mksnap", a snap can still be created on a pool that is part of an existing filesystem. Both paths are illustrated below.
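
For clarity, a sketch of the two snap-creation paths, using the pools from this cluster and illustrative snap names (snapA and snapB are examples, not taken from the reproduction steps). The client-side path is expected to be rejected for pools attached to a filesystem per the BZ 2168541 fix, while the mon-side path still succeeds on this build:

Client-side path (rados, blocked by the BZ 2168541 fix when the pool belongs to a filesystem):
[root@magna021 ~]# rados -p cephfs.newfs.data mksnap snapA

Mon-side path (ceph osd pool mksnap, still allowed on ceph version 17.2.6-25.el9cp; this is the bug reported here):
[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snapB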

Version-Release number of selected component (if applicable):
----------
ceph version 17.2.6-25.el9cp

How reproducible:
----------------
Always


Steps:
------
[root@magna021 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
name: newfs, metadata pool: cephfs.newfs.meta, data pools: [cephfs.newfs.data ]

[root@magna021 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
4 .nfs
5 cephfs.newfs.meta
6 cephfs.newfs.data

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.meta snap1
created pool cephfs.newfs.meta snap snap1

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap2
created pool cephfs.newfs.data snap snap2

[root@magna021 ~]# rados lssnap -p cephfs.newfs.meta
1       snap1   2023.04.24 19:04:39
1 snaps

[root@magna021 ~]# rados lssnap -p cephfs.newfs.data
1       snap2   2023.04.24 19:04:47
1 snaps
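
For completeness, a sketch of the expected behaviour on a build that contains the fix (ceph-17.2.6-88.el9cp or later). The error string below is an assumption and the exact message may differ, and snap3 is an example name; the key point is that the mksnap command should be rejected instead of creating a pool snap. The snaps created above can be cleaned up with the standard "ceph osd pool rmsnap" command:

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap3
Error EOPNOTSUPP: pool cephfs.newfs.data is in use by CephFS ... (exact message may vary)

[root@magna021 ~]# ceph osd pool rmsnap cephfs.newfs.meta snap1
[root@magna021 ~]# ceph osd pool rmsnap cephfs.newfs.data snap2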

Comment 9 errata-xmlrpc 2023-08-03 16:45:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4473

Comment 10 Red Hat Bugzilla 2023-12-02 04:26:32 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.