Bug 1873010 - [ceph-ansible] shrink-osd.yml doesn't clean lvm on shrinked OSD disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.1
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: high
Target Milestone: z2
Target Release: 4.1
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
Docs Contact: Aron Gunn
URL:
Whiteboard:
Depends On:
Blocks: 1816167
 
Reported: 2020-08-27 07:02 UTC by Meiyan Zheng
Modified: 2023-12-15 19:02 UTC
CC: 13 users

Fixed In Version: ceph-ansible-4.0.26-1.el8cp, ceph-ansible-4.0.26-1.el7cp
Doc Type: Bug Fix
Doc Text:
.The Ceph Ansible `shrink-osd.yml` playbook does not clean the Ceph OSD properly

The `zap` action done by the `ceph_volume` module did not handle the `osd_fsid` parameter. This caused the Ceph OSD to be improperly zapped, leaving logical volumes on the underlying devices. With this release, the `zap` action properly handles the `osd_fsid` parameter, and the Ceph OSD can be cleaned properly after shrinking.
Clone Of:
Environment:
Last Closed: 2020-09-30 17:26:56 UTC
Embargoed:
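The fix described in the doc text can be exercised from the command line. A minimal sketch, assuming a default ceph-ansible install path, an example OSD ID of 3, and a placeholder FSID (all of these are illustrative values, not taken from this bug report):

```shell
# Remove OSD 3 with the shrink-osd playbook (inventory path and OSD ID
# are example values; adjust for your environment).
ansible-playbook -i /usr/share/ceph-ansible/hosts \
    /usr/share/ceph-ansible/infrastructure-playbooks/shrink-osd.yml \
    -e osd_to_kill=3

# With the fix, the playbook's zap step honors the OSD FSID, which is
# equivalent to zapping by FSID manually; --destroy also removes the
# logical volumes and volume group left on the underlying device.
ceph-volume lvm zap --destroy --osd-fsid <osd-fsid>

# Verify no leftover ceph logical volumes remain on the freed device.
lvs --noheadings -o lv_name,vg_name | grep ceph || echo "no ceph LVs left"
```

Before this release, the `lvs` check above would still list `ceph-*` volume groups for the removed OSD, and the device could not be cleanly reused without zapping it by hand.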




Links
- GitHub ceph/ceph-ansible pull 5528 (closed): [backport stable-4.0] ceph_volume: fix regression (last updated 2021-02-09 06:19:35 UTC)
- Red Hat Product Errata RHBA-2020:4144 (last updated 2020-09-30 17:27:30 UTC)

Comment 9 errata-xmlrpc 2020-09-30 17:26:56 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

