Bug 2034309 - [cee/sd][ceph-volume]ceph-volume fails to zap the lvm device(advanced configuration)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.2
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On: 2093017 2093788
Blocks: 2102272
 
Reported: 2021-12-20 16:16 UTC by Geo Jose
Modified: 2023-10-05 10:38 UTC
CC: 11 users

Fixed In Version: ceph-16.2.8-18.el8cp
Doc Type: Enhancement
Doc Text:
.Users no longer need to manually wipe devices before redeploying OSDs
Previously, users were forced to manually wipe devices before redeploying OSDs. With this release, after zapping, the physical volumes on the devices are removed when no volume groups or logical volumes remain, so users are no longer forced to manually wipe devices before redeploying OSDs.
Clone Of:
Environment:
Last Closed: 2022-08-09 17:36:48 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-2825 0 None None None 2021-12-20 16:18:20 UTC
Red Hat Product Errata RHSA-2022:5997 0 None None None 2022-08-09 17:37:19 UTC

Description Geo Jose 2021-12-20 16:16:57 UTC
Description of problem:
 - ceph-volume fails to zap an LVM device correctly (advanced configuration).
 - When "ceph-volume lvm zap --destroy" is run against an LVM device, it runs dd on the logical volume as expected, but it then also removes the VG and the LV.
---
[ceph: root@m2 /]# ceph-volume lvm zap --destroy /dev/vg_mpatha/lv_mpatha
--> Zapping: /dev/vg_mpatha/lv_mpatha
Running command: /usr/bin/dd if=/dev/zero of=/dev/vg_mpatha/lv_mpatha bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
 stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0631291 s, 166 MB/s
--> Only 1 LV left in VG, will proceed to destroy volume group vg_mpatha
Running command: /usr/sbin/vgremove -v -f vg_mpatha
 stderr: Removing vg_mpatha-lv_mpatha (253:3)
 stderr: Archiving volume group "vg_mpatha" metadata (seqno 3).
  Releasing logical volume "lv_mpatha"
 stdout: Logical volume "lv_mpatha" successfully removed
 stderr: Creating volume group backup "/etc/lvm/backup/vg_mpatha" (seqno 4).
 stderr: Removing physical volume "/dev/mapper/mpatha" from volume group "vg_mpatha"
 stdout: Volume group "vg_mpatha" successfully removed
--> Zapping successful for: <LV: /dev/vg_mpatha/lv_mpatha>
---

Version-Release number of selected component (if applicable):
 RHCS 5.x

How/steps to reproduce:
 - Create an LV and deploy the OSD on it using "ceph orch daemon add osd hostname:vg/lv".
 - Remove the OSD.
 - Zap the LVM device (a reproduction sketch follows this list).
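
A minimal reproduction sketch, reusing the device and VG/LV names from the transcript above; the host name (m2) and the OSD id (0) are assumptions and will differ per cluster:
---
# Create the PV/VG/LV on the multipath device (names taken from the transcript above)
pvcreate /dev/mapper/mpatha
vgcreate vg_mpatha /dev/mapper/mpatha
lvcreate -n lv_mpatha -l 100%FREE vg_mpatha

# Deploy an OSD on the pre-created LV (advanced/LVM configuration)
ceph orch daemon add osd m2:vg_mpatha/lv_mpatha

# Remove the OSD again (replace 0 with the actual OSD id)
ceph orch osd rm 0

# Zap the LVM device from inside the cephadm shell
ceph-volume lvm zap --destroy /dev/vg_mpatha/lv_mpatha
---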

Actual results:
 - dd is run on the LVM device.
 - The LV is removed.
 - The VG is removed.
 - The disk header and LVM label persist on the device.
 

Expected results:
 - The VG should not be removed (in this advanced LVM scenario, the VG may contain other LVs); a verification sketch follows below.
 - ceph-volume should handle the command "ceph-volume lvm zap --destroy" properly.
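
A hedged verification sketch for the expected behavior, assuming the VG holds a second, unrelated LV (the name lv_other is hypothetical and not part of the original report): after zapping the OSD's LV, the VG and the other LV should still exist.
---
# Hypothetical second LV in the same VG (the name lv_other is an assumption for illustration)
lvcreate -n lv_other -L 1G vg_mpatha

# Zap only the OSD's LV
ceph-volume lvm zap --destroy /dev/vg_mpatha/lv_mpatha

# Expected: the VG and the unrelated LV survive the zap
vgs vg_mpatha
lvs vg_mpatha/lv_other
---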

Comment 19 errata-xmlrpc 2022-08-09 17:36:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997

