Bug 1851943 - pybind/mgr/volumes: volume deletion does not always remove the associated osd pools
Summary: pybind/mgr/volumes: volume deletion does not always remove the associated osd pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 4.1
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: z2
Target Release: 4.1
Assignee: Kotresh HR
QA Contact: subhash
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-29 13:49 UTC by Kotresh HR
Modified: 2020-09-30 17:26 UTC
CC List: 4 users

Fixed In Version: ceph-14.2.8-94.el8cp, ceph-14.2.8-94.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-30 17:26:19 UTC
Embargoed:




Links:
Ceph Project Bug Tracker 46235 (last updated 2020-06-29 15:47:39 UTC)
Red Hat Product Errata RHBA-2020:4144 (last updated 2020-09-30 17:26:44 UTC)

Description Kotresh HR 2020-06-29 13:49:09 UTC
Description of problem:
Volume deletion does not always remove the associated OSD pools. The pools are removed only if the volume was created through the mgr plugin; they are left behind when the volume was created with custom OSD pools. This is because the mgr plugin generates pool names following a specific pattern, and both volume creation and deletion rely on that pattern. Volume deletion should not rely on the naming pattern; it should discover the OSD pools associated with the volume before deleting it (see the sketch below).
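
A minimal sketch of that discovery step follows, for illustration only (not the actual patch; the helper name get_pool_names and the exact FSMap/OSDMap field access are assumptions based on the mgr module interface):

# Sketch: resolve the pools backing a volume from the FSMap/OSDMap instead of
# deriving them from the plugin's pool-naming pattern.
# 'mgr' is assumed to be the MgrModule instance available to mgr/volumes.
def get_pool_names(mgr, volname):
    fs_map = mgr.get("fs_map")
    osd_map = mgr.get("osd_map")
    # Map pool id -> pool name so the ids stored in the MDSMap can be resolved.
    pool_id_to_name = {p["pool"]: p["pool_name"] for p in osd_map["pools"]}
    for fs in fs_map["filesystems"]:
        mdsmap = fs["mdsmap"]
        if mdsmap["fs_name"] != volname:
            continue
        metadata_pools = [pool_id_to_name[mdsmap["metadata_pool"]]]
        data_pools = [pool_id_to_name[pid] for pid in mdsmap["data_pools"]]
        return metadata_pools, data_pools
    return [], []

Volume deletion would collect the pool names with such a helper before removing the filesystem and then remove each returned pool, so a custom data pool like cephfs_data1 in the reproducer below gets cleaned up as well.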

## Create a vstart cluster

# env MDS=3 ../src/vstart.sh -d -b -n --without-dashboard
# cd build
# bin/ceph osd pool create cephfs_data1 8
# bin/ceph fs add_data_pool a cephfs_data1
# bin/ceph fs volume rm a --yes-i-really-mean-it
# bin/ceph osd pool ls
device_health_metrics
cephfs_data1

Please see the upstream tracker for more information.

Version-Release number of selected component (if applicable):


How reproducible:
Always, when the volume's OSD pools do not follow the mgr plugin's naming pattern.

Steps to Reproduce:
1. Create a Ceph filesystem/volume (e.g. fs "a" on a vstart cluster).
2. Create a custom OSD pool and add it to the filesystem as a data pool (ceph osd pool create cephfs_data1 8; ceph fs add_data_pool a cephfs_data1).
3. Remove the volume (ceph fs volume rm a --yes-i-really-mean-it) and list the pools (ceph osd pool ls).

Actual results:
The custom data pool (cephfs_data1) is left behind after the volume is removed; only the pattern-named pools created by the mgr plugin are deleted.

Expected results:
All OSD pools associated with the volume, including custom ones, are removed when the volume is deleted.


Additional info:

Comment 2 Patrick Donnelly 2020-08-18 21:58:25 UTC
Changes were lost in a recent rebase; needs to be repushed.

Comment 9 errata-xmlrpc 2020-09-30 17:26:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

