Bug 2034309

Summary: [cee/sd][ceph-volume] ceph-volume fails to zap the LVM device (advanced configuration)
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Geo Jose <gjose>
Component: Ceph-Volume
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Ameena Suhani S H <amsyedha>
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.0
CC: adking, akraj, ceph-eng-bugs, gabrioux, jeremy.coulombe, mmuench, msaini, pdhiran, tserlin, vdas, vereddy
Target Release: 5.2   
Hardware: x86_64   
OS: Linux   
Fixed In Version: ceph-16.2.8-18.el8cp
Doc Type: Enhancement
Doc Text:
.Users need not manually wipe devices prior to redeploying OSDs
Previously, users had to manually wipe devices before redeploying OSDs. With this release, physical volumes on devices are removed after zapping when no volume groups or logical volumes remain, so users no longer need to manually wipe devices before redeploying OSDs.
Last Closed: 2022-08-09 17:36:48 UTC
Type: Bug
Bug Depends On: 2093017, 2093788    
Bug Blocks: 2102272    

Description Geo Jose 2021-12-20 16:16:57 UTC
Description of problem:
 - ceph-volume fails to zap the LVM device (advanced configuration).
 - When "ceph-volume lvm zap --destroy" is executed against an LVM device, it runs dd on the logical volume as expected, but it then also removes the LV and the entire VG:
---
[ceph: root@m2 /]# ceph-volume lvm zap --destroy /dev/vg_mpatha/lv_mpatha
--> Zapping: /dev/vg_mpatha/lv_mpatha
Running command: /usr/bin/dd if=/dev/zero of=/dev/vg_mpatha/lv_mpatha bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
 stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0631291 s, 166 MB/s
--> Only 1 LV left in VG, will proceed to destroy volume group vg_mpatha
Running command: /usr/sbin/vgremove -v -f vg_mpatha
 stderr: Removing vg_mpatha-lv_mpatha (253:3)
 stderr: Archiving volume group "vg_mpatha" metadata (seqno 3).
  Releasing logical volume "lv_mpatha"
 stdout: Logical volume "lv_mpatha" successfully removed
 stderr: Creating volume group backup "/etc/lvm/backup/vg_mpatha" (seqno 4).
 stderr: Removing physical volume "/dev/mapper/mpatha" from volume group "vg_mpatha"
 stdout: Volume group "vg_mpatha" successfully removed
--> Zapping successful for: <LV: /dev/vg_mpatha/lv_mpatha>
---
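For reference, here is a minimal shell sketch contrasting what the zap above effectively does with what an advanced-LVM-safe zap would do. The device and VG/LV names are taken from the log above; the exact commands ceph-volume issues internally may differ.

---
# What "ceph-volume lvm zap --destroy" effectively runs here
# (reconstructed from the log above):
dd if=/dev/zero of=/dev/vg_mpatha/lv_mpatha bs=1M count=10 conv=fsync
vgremove -v -f vg_mpatha   # destroys the whole VG, including any sibling LVs

# What an advanced-LVM-safe zap should do instead: remove only the
# target LV and leave the VG (and any other LVs in it) intact:
dd if=/dev/zero of=/dev/vg_mpatha/lv_mpatha bs=1M count=10 conv=fsync
lvremove -f vg_mpatha/lv_mpatha
---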

Version-Release number of selected component (if applicable):
 RHCS 5.x

How/steps to reproduce:
 - Create a VG/LV and deploy the OSD using "ceph orch daemon add osd hostname:vg/lv".
 - Remove the OSD.
 - Zap the LVM device (see the command sketch below).
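
A minimal sketch of these steps, assuming a multipath device /dev/mapper/mpatha and host m2 as in the log above; the VG/LV names and OSD id are illustrative.

---
# Create the VG and LV on the device:
vgcreate vg_mpatha /dev/mapper/mpatha
lvcreate -n lv_mpatha -l 100%FREE vg_mpatha

# Deploy an OSD on the pre-created LV:
ceph orch daemon add osd m2:vg_mpatha/lv_mpatha

# Remove the OSD (id 0 is illustrative), then zap the LV:
ceph orch osd rm 0
ceph-volume lvm zap --destroy /dev/vg_mpatha/lv_mpatha
---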

Actual results:
 - dd is run on the LVM device.
 - The LV is removed.
 - The VG is removed.
 - The disk header and LVM label persist on the underlying device (see the check below).
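
The leftover label can be confirmed with read-only checks against the underlying device, for example (device name as above):

---
# List the remaining on-disk signatures; without -a, wipefs only reports them:
wipefs /dev/mapper/mpatha

# The LVM PV label is also still visible:
pvs /dev/mapper/mpatha
---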
 

Expected results:
 - The VG should not be removed (the VG might contain other LVs, since this is an advanced LVM scenario).
 - ceph-volume should handle the "ceph-volume lvm zap --destroy" command properly (see the note below).
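
Per the Doc Text above, the fix in ceph-16.2.8-18.el8cp also removes the physical volume after zapping once no VGs or LVs remain on the device. Before the fix, that cleanup had to be done by hand before redeploying an OSD, typically something like the following (illustrative):

---
# Manual wipe previously required before reusing the device:
pvremove /dev/mapper/mpatha
wipefs -a /dev/mapper/mpatha
---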

Comment 19 errata-xmlrpc 2022-08-09 17:36:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997