Description of problem:
-------------------------
When attempting to create a CephFS volume using the ceph fs volume create command with a non-existent metadata or data pool, Ceph returns the expected ENOENT error. However, previously created pools with the correct names are also removed from the cluster.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ceph version 19.2.1-120.el9cp (9d9d735fbda3c9cca21e066e3d8238ee9520d682) squid (stable)

How reproducible:
----------------
Always

Steps to Reproduce:
-------------------
* Create OSD pools:
  ceph osd pool create cephfs_data
  ceph osd pool create cephfs_metadata
* Verify the pools exist:
  ceph osd lspools
* Attempt to create a CephFS volume with a typo in a pool name:
  ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data
* Check the pools again:
  ceph osd lspools

Actual results:
---------------
The ceph fs volume create command fails with ENOENT. The previously created pools (cephfs_data, cephfs_metadata) are deleted, even though they are not the misspelled pool names referenced in the command.

Expected results:
-----------------
The command should fail with ENOENT because the pool cephfs_metadata1 does not exist. The existing pools (cephfs_data, cephfs_metadata) should remain intact.

Additional info:
------------------
This issue was caught during QE validation of manual CephFS volume creation with explicitly passed pool names - https://bugzilla.redhat.com/show_bug.cgi?id=2355686

Output:
--------
[root@cali027 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@cali027 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_data1' does not exist
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data
Error ENOENT: pool 'cephfs_metadata' does not exist
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
[root@cali027 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@cali027 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
18 cephfs_data
19 cephfs_metadata
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_metadata1' does not exist
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data
Error ENOENT: pool 'cephfs_metadata1' does not exist
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data
Error ENOENT: pool 'cephfs_data' does not exist
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
[root@cali027 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@cali027 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
20 cephfs_data
21 cephfs_metadata
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data
Error ENOENT: pool 'cephfs_metadata1' does not exist
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
21 cephfs_metadata
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_metadata1' does not exist
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
21 cephfs_metadata
[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_data1' does not exist
[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
[root@cali027 ~]#
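Suggested regression check (sketch):
------------------------------------
The reproduction above can be scripted so that pool survival is verified automatically after the failed create. This is a minimal bash sketch, assuming admin access to the ceph CLI on the cluster node and reusing the pool and volume names from the steps above; it only wraps the commands already shown.

#!/usr/bin/env bash
# Reproduce the bug scenario: create two pools, attempt a CephFS volume
# create with a misspelled metadata pool name, then verify that the
# correctly named pools were not removed by the failed command.
set -u

ceph osd pool create cephfs_data
ceph osd pool create cephfs_metadata

# Expected to fail with ENOENT because 'cephfs_metadata1' does not exist.
ceph fs volume create cephfs_manual \
    --meta-pool cephfs_metadata1 --data-pool cephfs_data || true

# The pre-existing pools must still be listed after the failed create.
rc=0
for pool in cephfs_data cephfs_metadata; do
    if ! ceph osd lspools | grep -qw "$pool"; then
        echo "FAIL: pool '$pool' was removed by the failed volume create"
        rc=1
    fi
done
[ "$rc" -eq 0 ] && echo "PASS: existing pools survived the failed volume create"
exit "$rc"

On an affected build this check reports FAIL for cephfs_data, matching the output above; on a fixed build both pools should remain and the check passes.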
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775