Bug 2185588 - [CEE/sd][Ceph-volume] wrong block_db_size computed when adding OSD
Summary: [CEE/sd][Ceph-volume] wrong block_db_size computed when adding OSD
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 5.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1
Assignee: Guillaume Abrioux
QA Contact: Manisha Saini
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2186515 2186787 2192813
 
Reported: 2023-04-10 12:14 UTC by Tridibesh Chakraborty
Modified: 2024-11-28 12:38 UTC
CC List: 10 users

Fixed In Version: ceph-17.2.6-20.el9cp
Doc Type: Bug Fix
Doc Text:
.The correct size is calculated for each database device in `ceph-volume`

Previously, as of RHCS 4.3, `ceph-volume` no longer created a single VG containing all database devices; each database device had its own VG. Due to this, the database size was calculated incorrectly for each LV.

With this release, the logic takes the new per-device LVM layout of database devices into account, and the correct size is calculated for each database device.
Clone Of:
Clones: 2186515 2186787
Environment:
Last Closed: 2023-06-15 09:17:19 UTC
Embargoed:


Links
Github ceph/ceph pull 47413 (commit 4252cc44211f0ccebf388374744eaa26b32854d3), last updated 2023-04-14 13:36:49 UTC
Github ceph/ceph pull 50878, merged: "pacific: ceph-volume: fix issue with fast device allocs when there are multiple PVs per VG", last updated 2023-04-28 10:17:45 UTC
Red Hat Issue Tracker RHCEPH-6396, last updated 2023-04-10 12:15:20 UTC

Description Tridibesh Chakraborty 2023-04-10 12:14:50 UTC
Description of problem:

When creating non-collocated OSDs, ceph-volume does not consider all the db devices when calculating the available space and allocating block_db_size for the OSDs.

Version-Release number of selected component (if applicable):
RHCS 5.3z1 / 16.2.10-138.el8cp

How reproducible:
It happens every time.

Steps to Reproduce:
1. Add a couple of new disks for the non-collocated OSD scenario.
2. In the service specification file, list some of the disks as data devices and a couple of them as db devices, without specifying block_db_size.
3. Apply the spec file.
4. Observe that each db LV is created with a size of (size of a single db device / total number of data disks) instead of (total size of all db devices / total number of data disks); see the illustrative calculation below.
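For illustration only (hypothetical sizes, not taken from the customer environment): with two 100 GiB db devices serving ten data disks, the observed and expected per-OSD block.db sizes differ as follows.

    # Hypothetical example: 2 db devices of 100 GiB each, 10 data disks.
    db_devices_gib = [100, 100]
    num_data_disks = 10

    observed = db_devices_gib[0] / num_data_disks    # 10 GiB per block.db (buggy: only one device considered)
    expected = sum(db_devices_gib) / num_data_disks  # 20 GiB per block.db (all db devices considered)

    print(observed, expected)  # 10.0 20.0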

Actual results:

block_db_size is calculated smaller than the size that should be selected automatically.

Expected results:

ceph-volume should consider all the db devices when calculating block_db_size.
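
In other words, the per-OSD db size should be derived from the aggregate capacity of all db devices rather than from a single device. A minimal Python sketch of that sizing rule (illustrative only, not the actual ceph-volume implementation; the function name and inputs are made up):

    def block_db_size_per_osd(db_device_sizes, num_data_devices):
        """Give each data OSD an equal share of the combined capacity of all db devices."""
        total_db_capacity = sum(db_device_sizes)      # aggregate across every db device / VG
        return total_db_capacity // num_data_devices  # equal split over all data OSDs

    # With one VG per db device (the LVM layout since RHCS 4.3), the pre-fix logic
    # effectively divided only a single device's capacity by the number of data
    # disks, producing the undersized block.db LVs reported above.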


Additional info:

This issue has already been identified and fixed upstream (verified in 17.2.5), but the fix has not yet been backported to RHCS 5 / Pacific:

https://tracker.ceph.com/issues/56031
https://tracker.ceph.com/issues/56629
https://github.com/ceph/ceph/pull/47292

Comment 27 errata-xmlrpc 2023-06-15 09:17:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623

