Bug 2325383

Summary: [ceph-volume] fails to zap partitioned disk
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Volume
Version: 7.0
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Reporter: Santosh Pillai <sapillai>
Assignee: Guillaume Abrioux <gabrioux>
QA Contact: Aditya Ramteke <aramteke>
CC: akandath, aramteke, ceph-eng-bugs, cephqe-warriors, msaini, prsurve, tserlin
Fixed In Version: ceph-19.2.1-35.el9cp
Doc Type: No Doc Update
Clones: 2362649 (view as bug list)
Bug Blocks: 2362649
Type: Bug
Last Closed: 2025-06-26 12:18:54 UTC

Description Santosh Pillai 2024-11-12 05:42:45 UTC
Description of problem:

"ceph-volume lvm zap /dev/vdc1 --destroy" command fails on partitioned disk


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Bring the OSD down.
2. Destroy the OSD.
3. Zap the OSD disk using the ceph-volume command (a command sketch follows this list).
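
A minimal command sketch of the reproduction, assuming a single OSD with id 0 backed by /dev/vdc1 (the OSD id, device, and daemon management method are placeholders and will differ per cluster; on Rook the OSD deployment is scaled down instead of being stopped through systemd):

# 1. Bring the OSD down (placeholder OSD id 0)
ceph osd out osd.0
systemctl stop ceph-osd@0        # or scale the rook-ceph-osd-0 deployment to 0
# 2. Destroy the OSD so its id and auth entries can be reused
ceph osd destroy osd.0 --yes-i-really-mean-it
# 3. Zap the partition that backed the OSD
ceph-volume lvm zap /dev/vdc1 --destroy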

Actual results:

[root@rook-ceph-osd-0-d6999457f-9msnv rook-ceph]# ceph-volume lvm zap /dev/vdc1 --destroy
--> Zapping: /dev/vdc1
Running command: /usr/bin/dd if=/dev/zero of=/dev/vdc1 bs=1M count=10 conv=fsync
--> Destroying partition since --destroy was used: /dev/vdc1
Running command: /usr/sbin/parted /dev/vdc --script -- rm 1
 stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, device-mapper library will manage device nodes in device directory.
 stderr: Error
 stderr: :
 stderr: Partition(s) 1 on /dev/vdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
 stderr:
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
    sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 41, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 409, in main
    self.zap()
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 283, in zap
    self.zap_partition(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 225, in zap_partition
    disk.remove_partition(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 152, in remove_partition
    process.run(
  File "/usr/lib/python3.9/site-packages/ceph_volume/process.py", line 147, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 1
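
The parted error above ("unable to inform the kernel of the change, probably because it/they are in use") indicates the partition deletion is written to disk, but the kernel's re-read of the partition table fails because something still holds /dev/vdc1. A few commands that can help confirm what is holding the partition (device names taken from this report; output will vary, and this is diagnostic only, not part of the fix):

lsblk /dev/vdc                    # show vdc1 and anything (e.g. device-mapper) stacked on it
ls /sys/block/vdc/vdc1/holders    # kernel view of devices keeping vdc1 open
dmsetup ls --tree                 # device-mapper devices that may still reference the partition
partx --show /dev/vdc             # partition table as the kernel currently sees it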


Expected results: Partitioned disks should be cleaned up without any issues. 


Additional info:
- The issue is observed only with partitioned disks.
- The issue is also observed when the disk has only a single partition.
- The issue is not seen if the `--destroy` arg is removed, but then the disk is not cleaned completely and still has a `ceph-bluestore` filesystem signature on it, so that is not a workaround (a possible manual cleanup sketch follows this list).
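
Until a fixed ceph-volume is available, a possible manual cleanup is sketched below. This is an assumption / workaround idea, not the shipped fix, and it has not been validated inside the Rook OSD container where udev is bypassed: wipe the partition signatures, release any device-mapper/LVM holders, then delete the partition and ask the kernel to re-read the table.

# wipe filesystem/bluestore signatures from the partition (device from this report)
wipefs -a /dev/vdc1
# if LVM or device-mapper still holds the partition, release it first, e.g.:
#   dmsetup remove <dm-device>   /   vgremove <vg>   /   pvremove /dev/vdc1
# delete partition 1 and force the kernel to re-read the partition table
sgdisk -d 1 /dev/vdc
partprobe /dev/vdc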

Comment 11 errata-xmlrpc 2025-06-26 12:18:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775