Bug 2252878 - [ceph-volume] ceph orch redeploys OSD with wrong dedicated DB size for non-collocated scenario
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 6.1z3
Assignee: Guillaume Abrioux
QA Contact: Aditya Ramteke
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-12-05 04:15 UTC by Aditya Ramteke
Modified: 2023-12-12 13:56 UTC
CC: 5 users

Fixed In Version: ceph-17.2.6-166.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-12 13:56:18 UTC
Embargoed:




Links:
Github ceph/ceph pull 54786 (open): python-common: fix osdspec_affinity check (last updated 2023-12-05 17:05:20 UTC)
Red Hat Issue Tracker RHCEPH-8002 (last updated 2023-12-05 04:16:08 UTC)
Red Hat Product Errata RHSA-2023:7740 (last updated 2023-12-12 13:56:21 UTC)

Description Aditya Ramteke 2023-12-05 04:15:13 UTC
Description of problem:
Redeploying non-collocated OSDs results in a wrongly configured dedicated DB size.
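
For context, non-collocated OSDs of the kind described here are typically created from an OSD service spec. The sketch below is illustrative only; the service_id, host, and device paths are hypothetical and not taken from this report:

# osds.yaml (hypothetical spec: 4 data disks, 2 shared dedicated DB disks)
service_type: osd
service_id: non_collocated_osds
placement:
  hosts:
    - host1
spec:
  data_devices:
    paths:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
  db_devices:
    paths:
      - /dev/sdf
      - /dev/sdg

# Apply the spec; cephadm creates the OSDs and keeps the service managed
ceph orch apply -i osds.yaml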

Version-Release number of selected component (if applicable):
# ceph version
ceph version 17.2.6-164.el9cp (aab6f2daa7fb8660e32a2dbc099e28c67b7014bd) quincy (stable)

How reproducible:
3/5

Steps to Reproduce:
1. Create 4 OSDs on 4 data disks, with dedicated DB devices on 2 further disks, each holding the DBs for 2 OSDs (a sample spec is sketched in the description above).
2. Make sure the OSDs are in the managed state (the default).
3. Remove the OSDs via `ceph orch osd rm --zap <OSD_IDs>` (see the command sketch after this list).
4. The OSDs get recreated with the wrong DB size.
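
For reference, steps 2-4 map onto standard orchestrator commands roughly as follows (the OSD IDs 0-3 are illustrative):

# Step 2: confirm the OSD service is managed (no "unmanaged" flag)
ceph orch ls osd

# Step 3: remove the OSDs and zap the underlying devices
ceph orch osd rm 0 1 2 3 --zap
ceph orch osd rm status

# Step 4: once the devices are zapped, the still-managed spec redeploys
# the OSDs; compare the resulting DB sizes with the original ones
ceph orch ps --daemon-type osd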

Actual results:
The wrong DB size is assigned to the OSDs' dedicated DBs.

Expected results:
The DBs should be created with the correct size.

Comment 5 Guillaume Abrioux 2023-12-05 16:51:37 UTC
It turns out this is a cephadm bug that is triggered when no `service_id` was provided in the service spec.

Adam and I are working on a patch to address this issue.
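
For illustration, the spec stored for an OSD service can be inspected with the standard orchestrator CLI; per this comment, the bug is hit when the spec in effect carries no `service_id`:

# Export the stored OSD spec(s) and check for an explicit service_id field
ceph orch ls osd --export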

Comment 10 errata-xmlrpc 2023-12-12 13:56:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:7740

