Description of problem:
The "ceph-volume lvm zap /dev/vdc1 --destroy" command fails on a partitioned disk.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Bring the OSD down.
2. Destroy the OSD.
3. Zap the OSD disk using the ceph-volume command (a command sketch of these steps follows the Additional info section below).

Actual results:
[root@rook-ceph-osd-0-d6999457f-9msnv rook-ceph]# ceph-volume lvm zap /dev/vdc1 --destroy
--> Zapping: /dev/vdc1
Running command: /usr/bin/dd if=/dev/zero of=/dev/vdc1 bs=1M count=10 conv=fsync
--> Destroying partition since --destroy was used: /dev/vdc1
Running command: /usr/sbin/parted /dev/vdc --script -- rm 1
 stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, device-mapper library will manage device nodes in device directory.
 stderr: Error
 stderr: :
 stderr: Partition(s) 1 on /dev/vdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
 stderr: Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
    sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 41, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 409, in main
    self.zap()
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 283, in zap
    self.zap_partition(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 225, in zap_partition
    disk.remove_partition(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 152, in remove_partition
    process.run(
  File "/usr/lib/python3.9/site-packages/ceph_volume/process.py", line 147, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 1

Expected results:
Partitioned disks should be cleaned up without any issues.

Additional info:
- The issue is observed only with partitioned disks.
- The issue is also observed when there is only a single partition.
- The issue is not seen if the `--destroy` arg is removed, but then the disk is not cleaned completely and still has a `ceph-bluestore` filesystem on it, so it is not a workaround.
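For reference, a minimal command-line sketch of the reproduction steps. The OSD id (osd.0) is an assumption for illustration; the partition /dev/vdc1 matches the report above. Substitute your own id and device.

# mark the OSD down and destroy it (osd.0 is a hypothetical id)
ceph osd down osd.0
ceph osd destroy osd.0 --yes-i-really-mean-it
# zap the partition that backed the OSD; this is the step that fails
ceph-volume lvm zap /dev/vdc1 --destroy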
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775