Description of problem:

Deleting a volume does not remove all of the OSD pools associated with it. The pools are removed only if the volume was created through the mgr volumes plugin, not if it was created with custom OSD pools. This is because the mgr plugin generates pool names following a specific pattern, and both volume creation and deletion rely on that pattern. Volume deletion should not rely on the naming pattern; it should instead discover the OSD pools associated with the volume before deleting it.

Please see the upstream tracker for more information.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a vstart cluster:
   # env MDS=3 ../src/vstart.sh -d -b -n --without-dashboard
   # cd build
2. Create a custom data pool and add it to filesystem "a":
   # bin/ceph osd pool create cephfs_data1 8
   # bin/ceph fs add_data_pool a cephfs_data1
3. Remove the volume and list the remaining OSD pools:
   # bin/ceph fs volume rm a --yes-i-really-mean-it
   # bin/ceph osd pool ls

Actual results:
The custom data pool survives the volume deletion:
device_health_metrics
cephfs_data1

Expected results:
All OSD pools associated with the volume, including custom data pools such as cephfs_data1, are removed when the volume is deleted.

Additional info:
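As a sketch of one possible fix direction (not necessarily the committed implementation): rather than reconstructing pool names from the creation-time pattern, the deletion path could resolve the pools actually attached to the filesystem from the FSMap and OSDMap. The helper below is a minimal Python illustration; the name get_pool_names is hypothetical, and it assumes a ceph-mgr module context where mgr.get("fs_map") and mgr.get("osd_map") are available.

# Hypothetical helper: resolve the metadata and data pool names that are
# actually attached to a volume, instead of deriving them from the
# generated naming pattern used at creation time.
def get_pool_names(mgr, volname):
    metadata_pool_id = None
    data_pool_ids = []
    # The FSMap lists each filesystem with its MDSMap, which carries
    # the metadata pool id and the ids of all attached data pools.
    for fs in mgr.get("fs_map")["filesystems"]:
        if fs["mdsmap"]["fs_name"] == volname:
            metadata_pool_id = fs["mdsmap"]["metadata_pool"]
            data_pool_ids = fs["mdsmap"]["data_pools"]
            break
    if metadata_pool_id is None:
        return None, None  # volume not found
    # Map pool ids from the MDSMap to pool names via the OSDMap.
    pools = {p["pool"]: p["pool_name"] for p in mgr.get("osd_map")["pools"]}
    metadata_pool = pools[metadata_pool_id]
    data_pools = [pools[pid] for pid in data_pool_ids]
    return metadata_pool, data_pools

The delete path would call this before removing the filesystem and then remove each returned pool, which also covers data pools added after creation with fs add_data_pool.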
Changes were lost in a recent rebase and need to be repushed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4144