Bug 2354475 - [8.1] cephadm rm-cluster fails to cleanup disks
Summary: [8.1] cephadm rm-cluster fails to cleanup disks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 8.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 8.1
Assignee: Adam King
QA Contact: Mohit Bisht
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-03-24 07:34 UTC by Vinayak Papnoi
Modified: 2025-06-26 12:29 UTC
CC List: 2 users

Fixed In Version: ceph-19.2.1-121.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-06-26 12:29:21 UTC
Embargoed:




Links
System ID                                  Last Updated
Red Hat Issue Tracker RHCEPH-10942         2025-03-24 07:35:06 UTC
Red Hat Product Errata RHSA-2025:9775      2025-06-26 12:29:24 UTC

Description Vinayak Papnoi 2025-03-24 07:34:36 UTC
Description of problem:

After executing the cephadm rm-cluster command, a check for any remaining Ceph-related disks shows that the OSD LVM devices are still present:

2025-03-21 20:32:39,065 - cephci - ceph:1606 - INFO - Execution of lsblk -ln -o name | grep ceph- on 10.0.195.139 took 1.00385 seconds
2025-03-21 20:32:39,066 - cephci - run:854 - ERROR - Failed to clean ceph disks on node '10.0.195.139 -
ceph--15917e11--79d8--4d81--9af2--c6232a09c47f-osd--block--aff91c05--41c2--4cce--b0c6--e15ab7575715
ceph--521e6ffd--e426--4cc9--8e18--08aef1f9e447-osd--block--4d7cd851--0afb--4c48--9511--ae10085981f4
ceph--71546b34--0aaa--46d7--95c1--e0eb0bb976db-osd--block--228c6283--d3c0--4d6d--af52--d1d42249d13a
ceph--53328f7f--e899--4c79--ae0d--f1557b48f7b6-osd--block--e5d00aa2--403c--4bae--8976--c6a4d402b374
ceph--97ebdcc4--47a1--42e5--9247--0058d8f64f29-osd--block--73d0be56--b737--46f8--bddd--b42989d9066d
ceph--39bc05e6--b7ef--4a8d--be07--2f4831c0fd4f-osd--block--32171ad9--e70e--4bde--8558--4e619090019c
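
For anyone hitting this before the fix, a minimal manual-cleanup sketch (not part of this report; it assumes the leftover devices are ordinary ceph-volume LVM volume groups, that standard LVM/util-linux tools are available, and that <vg_name> and /dev/<device> are placeholders for the affected volume group and backing disk):

# vgs --noheadings -o vg_name | grep ceph-     (list leftover ceph-volume volume groups)
# vgremove -f -y <vg_name>                     (remove each leftover VG; this destroys the OSD data on it)
# wipefs -a /dev/<device>                      (clear remaining LVM/Ceph signatures from the backing disk)

This is only a stop-gap; the --zap-osds option is expected to perform this cleanup itself.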




Version-Release number of selected component (if applicable):

19.2.1-57

How reproducible:
Always

Steps to Reproduce:
1. Deploy a ceph 8.1 cluster
2. Perform the rm-cluster operation with the --zap-osds option
# cephadm rm-cluster --fsid <fsid> --zap-osds --force
3. Check for any disks still used by Ceph after rm-cluster (see the combined sketch below)
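
A minimal end-to-end sketch of steps 2-3, using only the commands already shown in this report (<fsid> is a placeholder for the cluster FSID):

# cephadm rm-cluster --fsid <fsid> --zap-osds --force
# lsblk -ln -o name | grep ceph-

On a correctly cleaned node the grep should print nothing.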

Actual results:

The OSD LVM devices are still listed by lsblk after rm-cluster completes.

Expected results:

No Ceph OSD devices should be listed after rm-cluster with --zap-osds completes.
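
As a quick verification sketch (assuming the same lsblk filter used in the log above), the match count should be zero once cleanup works:

# lsblk -ln -o name | grep -c ceph-            (expected output: 0 on a fully cleaned node)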

Additional info:

Comment 8 errata-xmlrpc 2025-06-26 12:29:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

