Bug 1851943

Summary: pybind/mgr/volumes: volume deletion does not always remove the associated osd pools
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Kotresh HR <khiremat>
Component: CephFS    Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA QA Contact: subhash <vpoliset>
Severity: low Docs Contact:
Priority: low    
Version: 4.1    CC: ceph-eng-bugs, pdonnell, sweil, tserlin
Target Milestone: z2   
Target Release: 4.1   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version: ceph-14.2.8-94.el8cp, ceph-14.2.8-94.el7cp Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-09-30 17:26:19 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Kotresh HR 2020-06-29 13:49:09 UTC
Description of problem:
The volume deletion does not always remove the associated OSD pools. The pools
are removed only if the volume was created through the mgr plugin, not if it
was created with custom OSD pools. This is because the mgr plugin generates
pool names following a specific pattern, and both create and delete volume rely
on that pattern. Delete volume should not rely on the pattern; it should
discover the OSD pools associated with the volume before deleting it.
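
As a sketch of that direction (illustrative only; the command is standard Ceph
CLI, but the exact JSON field names are quoted from memory and may differ
slightly), the pools that belong to a filesystem are already recorded in the
FSMap and can be listed before the volume is torn down, instead of being
derived from a naming pattern:

#bin/ceph fs ls --format=json-pretty

Each entry reports the filesystem's "name", "metadata_pool" and "data_pools";
removing exactly those pools after the filesystem itself is removed would also
cover volumes backed by custom pools.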

## Create a vstart cluster

#env MDS=3 ../src/vstart.sh -d -b -n --without-dashboard
#cd build
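
## Add a custom data pool to the default filesystem 'a' created by vstart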
#bin/ceph osd pool create cephfs_data1 8
#bin/ceph fs add_data_pool a cephfs_data1
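
## Remove the volume and list the remaining pools; the custom data pool is left behind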
#bin/ceph fs volume rm a --yes-i-really-mean-it
#bin/ceph osd pool ls
device_health_metrics
cephfs_data1
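
In the meantime the leftover pool can be cleaned up by hand (standard pool
deletion, shown here only as an example; it requires the mon_allow_pool_delete
setting to be enabled):

#bin/ceph osd pool rm cephfs_data1 cephfs_data1 --yes-i-really-really-mean-it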

Please see the upstream tracker for more information.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Patrick Donnelly 2020-08-18 21:58:25 UTC
Changes were lost in a recent rebase; they need to be repushed.

Comment 9 errata-xmlrpc 2020-09-30 17:26:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144