Bug 2359798 - Unexpected Deletion of existing pools upon failed "ceph fs volume create" command with Non-Existing Pools
Summary: Unexpected Deletion of existing pools upon failed "ceph fs volume create" command with Non-Existing Pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 8.1
Assignee: Rishabh Dave
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-04-15 16:58 UTC by Hemanth Kumar
Modified: 2025-06-26 12:30 UTC
CC: 4 users

Fixed In Version: ceph-19.2.1-169.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-06-26 12:30:33 UTC
Embargoed:
hyelloji: needinfo+




Links
Ceph Project Bug Tracker 70945 (last updated 2025-04-16 09:02:02 UTC)
Github ceph/ceph pull 62843 - mgr/vol: don't delete user-created pool in "volume create" command (open; last updated 2025-04-16 09:02:02 UTC)
Red Hat Issue Tracker RHCEPH-11181 (last updated 2025-04-15 17:00:12 UTC)
Red Hat Product Errata RHSA-2025:9775 (last updated 2025-06-26 12:30:36 UTC)

Description Hemanth Kumar 2025-04-15 16:58:24 UTC
Description of problem:
-------------------------
When attempting to create a CephFS volume using the ceph fs volume create command with a non-existent metadata or data pool, Ceph returns the expected ENOENT error.

However, existing pools whose names were specified correctly in the command are removed from the cluster, even though the command did not create them.
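
Based on the linked upstream fix (Github ceph/ceph pull 62843, "mgr/vol: don't delete user-created pool in 'volume create' command"), the likely mechanism is that the command's failure-cleanup path removes the named pools without checking whether it created them. The following is a hypothetical, minimal Python sketch of that pattern; the names and structure are illustrative only and are not the actual mgr/volumes code:

import errno

# Illustrative stand-in for the cluster's pool list; not a real Ceph API.
cluster_pools = {"cephfs_data", "cephfs_metadata"}  # pre-existing, user-created pools

def create_volume_buggy(meta_pool, data_pool):
    """Hypothetical buggy flow: on ENOENT, clean up *all* named pools,
    including ones this command never created."""
    for pool in (meta_pool, data_pool):
        if pool not in cluster_pools:
            # Cleanup-on-failure wrongly removes the correctly named,
            # pre-existing pool as well -- this matches the reported behavior.
            cluster_pools.discard(meta_pool)
            cluster_pools.discard(data_pool)
            return -errno.ENOENT, "pool '%s' does not exist" % pool
    return 0, "volume created"

print(create_volume_buggy("cephfs_metadata1", "cephfs_data"))
print(sorted(cluster_pools))  # cephfs_data is gone even though it pre-existed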

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ceph version 19.2.1-120.el9cp (9d9d735fbda3c9cca21e066e3d8238ee9520d682) squid (stable)

How reproducible:
----------------
Always


Steps to Reproduce:
-------------------
* Create OSD pools:
ceph osd pool create cephfs_data
ceph osd pool create cephfs_metadata

* Verify pools exist:
ceph osd lspools

* Attempt to create a CephFS volume with a typo in pool name:
ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data

* Check pools again:
ceph osd lspools

Actual results:
---------------
The ceph fs volume create command fails with ENOENT.

Previously created pools (cephfs_data, cephfs_metadata) are deleted, even though their names were specified correctly in the command.

Expected results:
-----------------
The command should fail with ENOENT because the pool cephfs_metadata1 does not exist.

Existing pools (cephfs_data, cephfs_metadata) should remain intact.
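
For contrast, a hypothetical sketch of the expected behavior using the same illustrative names (again, not the actual code of the fix in pull 62843): validate the named pools up front and never remove pools the command did not create.

import errno

cluster_pools = {"cephfs_data", "cephfs_metadata"}  # pre-existing, user-created pools

def create_volume_expected(meta_pool, data_pool):
    """Hypothetical expected flow: fail early with ENOENT and leave
    pre-existing pools untouched."""
    for pool in (meta_pool, data_pool):
        if pool not in cluster_pools:
            # No cleanup: this command created nothing, so it removes nothing.
            return -errno.ENOENT, "pool '%s' does not exist" % pool
    # ... proceed to create the filesystem on the existing pools ...
    return 0, "volume created"

print(create_volume_expected("cephfs_metadata1", "cephfs_data"))
print(sorted(cluster_pools))  # both pre-existing pools remain intact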


Additional info:
------------------
This issue was caught during QE validation of manual CephFS volume creation with explicitly passed pool names - https://bugzilla.redhat.com/show_bug.cgi?id=2355686

Output:
--------
[root@cali027 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created

[root@cali027 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_data1' does not exist

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data
Error ENOENT: pool 'cephfs_metadata' does not exist

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data

[root@cali027 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created

[root@cali027 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
18 cephfs_data
19 cephfs_metadata

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_metadata1' does not exist

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data
Error ENOENT: pool 'cephfs_metadata1' does not exist

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data
Error ENOENT: pool 'cephfs_data' does not exist

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data

[root@cali027 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created

[root@cali027 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
20 cephfs_data
21 cephfs_metadata

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data
Error ENOENT: pool 'cephfs_metadata1' does not exist

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
21 cephfs_metadata

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata1 --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_metadata1' does not exist

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
21 cephfs_metadata

[root@cali027 ~]# ceph fs volume create cephfs_manual --meta-pool cephfs_metadata --data-pool cephfs_data1
Error ENOENT: pool 'cephfs_data1' does not exist

[root@cali027 ~]# ceph osd lspools
1 .mgr
2 cephfs.cephfs.meta
3 cephfs.cephfs.data
[root@cali027 ~]#

Comment 10 errata-xmlrpc 2025-06-26 12:30:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

