+++ This bug was initially created as a clone of Bug #2189787 +++

Description of problem:
-----------------------
While verifying BZ - https://bugzilla.redhat.com/show_bug.cgi?id=2168541 it was found that there are two ways to create a snapshot on RADOS pools. The fix covers the "rados mksnap" path, but "ceph osd pool mksnap" still allows creating a snapshot on a pool that is part of an existing file system.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ceph version 17.2.6-25.el9cp

How reproducible:
-----------------
Always

Steps:
------
[root@magna021 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
name: newfs, metadata pool: cephfs.newfs.meta, data pools: [cephfs.newfs.data ]

[root@magna021 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
4 .nfs
5 cephfs.newfs.meta
6 cephfs.newfs.data

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.meta snap1
created pool cephfs.newfs.meta snap snap1

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap2
created pool cephfs.newfs.data snap snap2

[root@magna021 ~]# rados lssnap -p cephfs.newfs.meta
1	snap1	2023.04.24 19:04:39
1 snaps

[root@magna021 ~]# rados lssnap -p cephfs.newfs.data
1	snap2	2023.04.24 19:04:47
1 snaps

--- Additional comment from Venky Shankar on 2023-04-27 11:01:14 IST ---

Milind, please clone one for 5.3.

--- Additional comment from Milind Changire on 2023-07-11 22:18:57 IST ---

MR - https://gitlab.cee.redhat.com/ceph/ceph/-/merge_requests/309
*** This bug has been marked as a duplicate of bug 2203746 ***
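For illustration, a minimal Python sketch of the guard behavior the report expects: "ceph osd pool mksnap" should refuse to snapshot a pool that backs a CephFS file system, mirroring the check already applied on the "rados mksnap" path. The function and variable names (`mksnap`, `fs_pools`) are hypothetical and do not reflect the actual Ceph implementation.

```python
def mksnap(pool: str, snap: str, fs_pools: set[str]) -> str:
    """Create a pool-level snapshot unless the pool backs a file system.

    Hypothetical sketch: real Ceph performs this check in the monitor,
    not in client-side Python.
    """
    if pool in fs_pools:
        # Expected post-fix behavior: reject, as "rados mksnap" already does.
        raise PermissionError(
            f"pool {pool} is in use by CephFS; pool snapshots are disallowed"
        )
    return f"created pool {pool} snap {snap}"


# Pools backing the two file systems shown in the report's "ceph fs ls" output.
cephfs_pools = {
    "cephfs.cephfs.meta", "cephfs.cephfs.data",
    "cephfs.newfs.meta", "cephfs.newfs.data",
}
```

Under this sketch, snapshotting an unrelated pool such as `.mgr` succeeds, while `mksnap("cephfs.newfs.meta", "snap1", cephfs_pools)` raises instead of printing "created pool ... snap snap1" as in the reproduction above.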