Bug 1946478 - [GSS][ceph-volume]"ceph-volume lvm batch" shows "% of device" as 0% for DB device
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.1
Assignee: Andrew Schoen
QA Contact: Rahul Lepakshi
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks: 2031073
 
Reported: 2021-04-06 08:35 UTC by Janmejay Singh
Modified: 2025-08-08 11:35 UTC
CC List: 13 users

Fixed In Version: ceph-16.2.6-1.el8cp ceph-16.2.6-1.el7cp
Doc Type: Bug Fix
Doc Text:
.The `ceph-volume lvm batch --report` command now displays the correct value of the `% of device` parameter
Previously, users did not get correct information about the amount of space available on a device passed to `--db-devices`, because `ceph-volume` incorrectly calculated the `% of device` value for DB devices when running the `ceph-volume lvm batch --report` command. With this release, the `ceph-volume lvm batch --report` command correctly shows the space used on each device given in `--db-devices`.
Clone Of:
Environment:
Last Closed: 2022-04-04 10:20:36 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 44749 0 None None None 2021-04-27 15:18:50 UTC
Github ceph ceph pull 34740 0 None closed ceph-volume: major batch refactor 2021-04-27 15:18:50 UTC
Github ceph ceph pull 41506 0 None closed ceph-volume: fix batch report and respect ceph.conf config values 2021-06-15 14:46:03 UTC
Red Hat Product Errata RHSA-2022:1174 0 None None None 2022-04-04 10:21:00 UTC

Description Janmejay Singh 2021-04-06 08:35:13 UTC
Description of problem:
Creating an OSD with "ceph-volume lvm batch" shows "% of device" as 0% for the DB device if that device is already serving as the DB for other OSDs.


Version-Release number of selected component (if applicable):
RHCS 4.2

How reproducible:
1: Create a non-collocated OSD.
#ceph-volume lvm batch --bluestore /dev/sda /dev/nvme0n1
2: Now create another OSD using the same DB device, to utilize the remaining space on it.
#ceph-volume lvm batch --bluestore /dev/vdc /dev/nvme0n1
3: If we use the '--db-devices' flag at this point, it ignores the DB device completely and creates a collocated OSD.
  #ceph-volume lvm batch --bluestore /dev/vdc --db-devices /dev/nvme0n1 --report
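The faulty value reduces to a simple per-device share calculation. The sketch below is illustrative only (not the actual ceph-volume code, and the function name `percent_of_device` is hypothetical): it shows the kind of arithmetic the corrected `--report` performs, where a DB device shared by two OSDs should report roughly 50% per DB LV rather than 0%.

```python
def percent_of_device(total_bytes: int, requested_bytes: int) -> float:
    """Return the share of a device a new LV will occupy, as a percentage.

    The reported bug amounted to this value coming out as 0% for
    --db-devices that already carried DB LVs; the fix makes the report
    account for the space actually allocated on each device.
    """
    if total_bytes <= 0:
        raise ValueError("device size must be positive")
    return round(100 * requested_bytes / total_bytes, 2)

# Hypothetical numbers: a 100 GiB NVMe DB device split evenly
# between two OSDs' DB LVs.
device_size = 100 * 1024**3
db_lv_size = device_size // 2

print(percent_of_device(device_size, db_lv_size))  # 50.0, not 0.0
```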

Comment 18 Ken Dreyer (Red Hat) 2021-10-05 21:40:11 UTC
https://tracker.ceph.com/issues/51107 is resolved in v14.2.22 upstream, and we've rebased to that version, so I'm moving this to ON_QA.

Comment 19 Ken Dreyer (Red Hat) 2021-10-05 21:44:26 UTC
(Er, that previous comment was for RHCS 4.3). For RHCS 5.1:

https://tracker.ceph.com/issues/51108 is resolved in v16.2.5 upstream, and we've rebased ceph-5.1-rhel-patches to v16.2.6, so I'm moving this to ON_QA.

Comment 27 errata-xmlrpc 2022-04-04 10:20:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

