Bug 2252878

Summary: [ceph-volume] ceph orch redeploys OSD with wrong dedicated DB size for non-collocated scenario
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 6.1
Target Release: 6.1z3
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
Keywords: Regression
Reporter: Aditya Ramteke <aramteke>
Assignee: Guillaume Abrioux <gabrioux>
QA Contact: Aditya Ramteke <aramteke>
CC: ceph-eng-bugs, cephqe-warriors, gabrioux, tserlin, vdas
Fixed In Version: ceph-17.2.6-166.el9cp
Type: Bug
Last Closed: 2023-12-12 13:56:18 UTC

Description Aditya Ramteke 2023-12-05 04:15:13 UTC
Description of problem:
Redeploying non-collocated OSDs results in dedicated DB volumes with the wrong size.

Version-Release number of selected component (if applicable):
# ceph version
ceph version 17.2.6-164.el9cp (aab6f2daa7fb8660e32a2dbc099e28c67b7014bd) quincy (stable)

How reproducible:
3/5

Steps to Reproduce:
1. Create 4 OSDs on 4 data disks with dedicated DB devices on 2 separate disks, each DB disk holding the DBs for 2 OSDs.
2. Make sure the OSD service is in the managed state (the default).
3. Remove the OSDs via `ceph orch osd rm <OSD_IDs> --zap`.
4. The OSDs get recreated with the wrong DB size (see the command sketch below).
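
A minimal sketch of steps 3 and 4, assuming the four OSDs got IDs 0-3 (the IDs and the metadata field used below are illustrative and may differ per cluster and release):

Record the dedicated DB size reported for one of the OSDs before removal:

# ceph osd metadata 0 | grep bluefs_db_size

Remove and zap all four OSDs; because the service is managed, cephadm redeploys OSDs onto the freshly zapped devices:

# ceph orch osd rm 0 1 2 3 --zap
# ceph orch osd rm status

Once the removal queue drains and the OSDs are recreated, compare the DB size again; with this bug the redeployed DB comes back with a different size:

# ceph osd metadata 0 | grep bluefs_db_size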

Actual results:
The recreated OSDs get dedicated DB volumes with the wrong size.

Expected results:
The DBs should be recreated with the correct size.
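
For reference, with ceph-volume's default batch sizing each dedicated DB device is split evenly among the OSDs placed on it, so with two DBs per DB disk each DB LV should end up at roughly half of that disk (assuming no explicit block_db_size in the spec). One way to check the actual DB LV sizes on the OSD host (a sketch; VG/LV names will differ):

# cephadm shell -- ceph-volume lvm list
# lvs -o lv_name,vg_name,lv_size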

Comment 5 Guillaume Abrioux 2023-12-05 16:51:37 UTC
It turns out this is a cephadm bug that is hit when no `service_id` is provided in the service spec.

Adam and I are working on a patch in order to address this issue.
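
Until a build with the fix is available, a spec that sets an explicit `service_id` should avoid the trigger described above. A minimal sketch of such a non-collocated OSD spec (the service_id, host name, and device paths below are illustrative, not taken from the reporter's environment):

service_type: osd
service_id: non_collocated_db    # explicit service_id, avoids the trigger described above
placement:
  hosts:
    - host1
spec:
  data_devices:
    paths:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
  db_devices:
    paths:
      - /dev/sdf
      - /dev/sdg

Applied with `ceph orch apply -i <spec_file>`.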

Comment 10 errata-xmlrpc 2023-12-12 13:56:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:7740