+++ This bug was initially created as a clone of Bug #2189787 +++

Description of problem:
-----------------------
While verifying BZ https://bugzilla.redhat.com/show_bug.cgi?id=2168541, it was found that there are two ways to create a snapshot on RADOS pools. The fix covers the "rados mksnap" path; with "ceph osd pool mksnap" it is still possible to create a snapshot on a pool that is part of an existing file system.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ceph version 17.2.6-25.el9cp

How reproducible:
-----------------
Always

Steps:
------
[root@magna021 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
name: newfs, metadata pool: cephfs.newfs.meta, data pools: [cephfs.newfs.data ]

[root@magna021 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
4 .nfs
5 cephfs.newfs.meta
6 cephfs.newfs.data

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.meta snap1
created pool cephfs.newfs.meta snap snap1

[root@magna021 ~]# ceph osd pool mksnap cephfs.newfs.data snap2
created pool cephfs.newfs.data snap snap2

[root@magna021 ~]# rados lssnap -p cephfs.newfs.meta
1	snap1	2023.04.24 19:04:39
1 snaps

[root@magna021 ~]# rados lssnap -p cephfs.newfs.data
1	snap2	2023.04.24 19:04:47
1 snaps

--- Additional comment from Venky Shankar on 2023-04-27 11:01:14 IST ---

Milind, please clone one for 5.3.
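For reference, a minimal sketch of the asymmetry between the two snapshot paths on this cluster (pool names taken from the output above). The expected rejection on the rados path assumes a build that contains the BZ 2168541 fix; the exact error text may differ by build:

    # rados path -- covered by the BZ 2168541 fix; expected to be refused
    # because the pool belongs to a CephFS file system (assumed behavior)
    rados -p cephfs.newfs.data mksnap snap3

    # mon path -- not covered by that fix; still succeeds, which is this bug
    ceph osd pool mksnap cephfs.newfs.data snap3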
*** Bug 2222047 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:4760