Bug 2325383 - [ceph-volume] fails to zap partitioned disk
Summary: [ceph-volume] fails to zap partitioned disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.1
Assignee: Guillaume Abrioux
QA Contact: Aditya Ramteke
URL:
Whiteboard:
Duplicates: 2352542
Depends On:
Blocks: 2362649
 
Reported: 2024-11-12 05:42 UTC by Santosh Pillai
Modified: 2025-06-26 12:19 UTC
CC List: 7 users

Fixed In Version: ceph-19.2.1-35.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned To: 2362649
Environment:
Last Closed: 2025-06-26 12:18:54 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 62177 0 None Merged squid: ceph-volume: allow zapping partitions on multipath devices 2025-03-14 11:00:07 UTC
Red Hat Issue Tracker RHCEPH-10215 0 None None None 2024-11-12 05:44:16 UTC
Red Hat Product Errata RHSA-2025:9775 0 None None None 2025-06-26 12:19:01 UTC

Description Santosh Pillai 2024-11-12 05:42:45 UTC
Description of problem:

"ceph-volume lvm zap /dev/vdc1 --destroy" command fails on partitioned disk


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Bring the OSD down.
2. Destroy the OSD.
3. Zap the OSD disk using the ceph-volume command (see the command sketch after this list).
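
A minimal command sketch of these steps, assuming OSD id 0 and /dev/vdc1 as the OSD data device (both values are illustrative, taken from the output below):

ceph osd down 0                               # mark the OSD down
ceph osd destroy 0 --yes-i-really-mean-it     # mark the OSD destroyed so its disk can be reused
ceph-volume lvm zap /dev/vdc1 --destroy       # zap the partition; this is the step that fails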

Actual results:

[root@rook-ceph-osd-0-d6999457f-9msnv rook-ceph]# ceph-volume lvm zap /dev/vdc1 --destroy
--> Zapping: /dev/vdc1
Running command: /usr/bin/dd if=/dev/zero of=/dev/vdc1 bs=1M count=10 conv=fsync
--> Destroying partition since --destroy was used: /dev/vdc1
Running command: /usr/sbin/parted /dev/vdc --script -- rm 1
 stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, device-mapper library will manage device nodes in device directory.
 stderr: Error
 stderr: :
 stderr: Partition(s) 1 on /dev/vdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
 stderr:
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 33, in <module>
    sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 41, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 409, in main
    self.zap()
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 283, in zap
    self.zap_partition(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/zap.py", line 225, in zap_partition
    disk.remove_partition(device)
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/disk.py", line 152, in remove_partition
    process.run(
  File "/usr/lib/python3.9/site-packages/ceph_volume/process.py", line 147, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 1
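
The traceback shows that the failure originates from the "parted /dev/vdc --script -- rm 1" call, which exits non-zero because the kernel could not be informed of the partition-table change. A sketch of inspecting the same condition by hand, assuming the device layout from the report (device names are illustrative, not a proposed fix):

parted /dev/vdc --script -- rm 1    # the same call ceph-volume runs; fails with "unable to inform the kernel"
partprobe /dev/vdc                  # ask the kernel to re-read the partition table
lsblk /dev/vdc                      # check whether /dev/vdc1 is still listed, i.e. still held by the kernel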


Expected results: Partitioned disks should be cleaned up without any issues. 


Additional info:
- The issue is observed only with partitioned disks.
- The issue is also observed when there is only a single partition.
- The issue is not seen if the `--destroy` argument is removed, but then the disk is not cleaned completely and still carries a `ceph-bluestore` signature, so dropping the flag is not a workaround (see the check sketch after this list).
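
A quick way to see the leftover signature mentioned above, as a sketch assuming the same /dev/vdc1 device (the exact signature name reported by the tooling may differ; blkid/lsblk typically show it as ceph_bluestore):

wipefs --no-act /dev/vdc1    # list remaining on-disk signatures without erasing anything
lsblk -f /dev/vdc            # the FSTYPE column shows whether a bluestore signature is still present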

Comment 11 errata-xmlrpc 2025-06-26 12:18:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

