Bug 2185887 - [cee/sd][cephadm][testfix] Zapping OSDs on Hosts deployed with Ceph RHCS 4.2z4 or before does not work after upgrade to RHCS 5.3z2 testfix
Summary: [cee/sd][cephadm][testfix] Zapping OSDs on Hosts deployed with Ceph RHCS 4.2z...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 5.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.3z3
Assignee: Guillaume Abrioux
QA Contact: Vinayak Papnoi
Docs Contact: lysanche
URL:
Whiteboard:
Depends On:
Blocks: 2190412
 
Reported: 2023-04-11 14:00 UTC by Milind
Modified: 2023-06-15 10:01 UTC
CC: 8 users

Fixed In Version: ceph-16.2.10-170.el8cp
Doc Type: Bug Fix
Doc Text:
.Zapping OSDs deployed prior to {storage-product} 4.3 works as expected
Previously, in old deployments, an `lv_uuid` matched more than one physical volume (PV). As a result, users could not zap OSDs with dedicated DB devices. With this fix, `ceph-volume` looks for all PVs instead of assuming only one PV in the corresponding volume group, and zapping OSDs deployed prior to {storage-product} 4.3 works as expected.
Clone Of:
: 2190412 (view as bug list)
Environment:
Last Closed: 2023-05-23 00:19:10 UTC
Embargoed:
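The fix described in the Doc Text above can be illustrated with a minimal sketch. The data structures and function name here are hypothetical (the real change lives in ceph-volume's `zap.py` and queries LVM directly); the point is only the behavioral difference: collect every PV that belongs to the OSD's volume group instead of assuming the VG contains exactly one.

```python
# Minimal illustration of the zap fix: resolve ALL physical volumes in a
# volume group rather than assuming a single PV per VG. Names and data
# structures are hypothetical, not the actual ceph-volume code.

def pvs_for_vg(pv_table, vg_name):
    """Return every physical volume belonging to the given volume group.

    The pre-fix behavior effectively assumed one PV per VG, so OSDs whose
    VG spanned several PVs (e.g. deployments with a dedicated DB device)
    could not be zapped.
    """
    return [pv for pv, vg in pv_table.items() if vg == vg_name]

# Example: a VG backed by two PVs, as in old RHCS 4.2-era deployments.
pv_table = {
    "/dev/sdb": "ceph-block-vg",
    "/dev/sdc": "ceph-block-vg",
    "/dev/sdd": "ceph-db-vg",
}

print(pvs_for_vg(pv_table, "ceph-block-vg"))  # → ['/dev/sdb', '/dev/sdc']
```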


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 50745 0 None Merged ceph-volume: quick fix in zap.py 2023-05-22 09:17:37 UTC
Red Hat Issue Tracker RHCEPH-6416 0 None None None 2023-04-11 14:01:34 UTC
Red Hat Product Errata RHBA-2023:3259 0 None None None 2023-05-23 00:19:46 UTC

Comment 4 Milind 2023-04-13 12:52:17 UTC
Hi Adam,

The customer tested it from 4.2z4 and from 4.3z1 (latest), so the change in behavior is between the 4-69 and 4-160 images, or between 4-160 and 4-195. The cephadm logs only show that the OSD was removed and nothing else, so to get a clear picture I am reproducing the customer scenario in my lab environment, since 5.3.2 is already GA and the testfix image is also 5.3.2 based.

Will update you by tomorrow on how things went.

Regards
Milind Verma

Comment 18 errata-xmlrpc 2023-05-23 00:19:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3259

