Bug 1854973 - ceph_volume module prevents ceph-ansible from zapping OSDs by osd_fsid when lvm_volumes is used
Summary: ceph_volume module prevents ceph-ansible from zapping OSDs by osd_fsid when lvm_volumes is used
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 4.1
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks: 1733577 1760354
Reported: 2020-07-08 14:15 UTC by Guillaume Abrioux
Modified: 2020-09-30 17:26 UTC
CC List: 14 users

Fixed In Version: ceph-ansible-4.0.29-1.el8cp, ceph-ansible-4.0.29-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-30 17:26:19 UTC
Embargoed:




Links
- Github ceph/ceph-ansible commit f402ab2b87813f0f9c3fba661a52f5afebc19723 (last updated 2020-10-08 15:00:38 UTC)
- Github ceph/ceph-ansible pull 5528: [backport stable-4.0] ceph_volume: fix regression (closed, last updated 2021-01-14 09:31:50 UTC)
- Red Hat Product Errata RHBA-2020:4144 (last updated 2020-09-30 17:26:44 UTC)

Description Guillaume Abrioux 2020-07-08 14:15:20 UTC
Description of problem:
ceph_volume module prevents ceph-ansible from zapping OSDs by osd_fsid when lvm_volumes is used

Version-Release number of selected component (if applicable):
ceph-ansible v4.0.25

How reproducible:
100%

Steps to Reproduce:
1/ Deploy a cluster using `lvm_volumes` backed by pre-created LVs/VGs (see the sketch below)
2/ Run shrink-osd.yml to shrink the OSDs
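
For reference, a minimal group_vars sketch that reproduces this; the LV/VG names are hypothetical and assume the logical volumes were created beforehand:

# group_vars/osds.yml -- hypothetical LV/VG names
osd_objectstore: bluestore
lvm_volumes:
  - data: data-lv1      # logical volume holding the OSD data
    data_vg: data-vg1   # volume group containing that LV

The shrink step then boils down to something like `ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=0`, where osd_to_kill is the id of the OSD to remove.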

Actual results:
The playbook runs fine and the cluster appears healthy, but the device is never physically zapped because the ceph_volume.py module skips it:

"stdout: Skipped, nothing to zap"


Expected results:
The OSD is shrunk and the corresponding devices are properly zapped

Comment 10 errata-xmlrpc 2020-09-30 17:26:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

