Bug 2082414 - [cee/sd][cephadm] ceph-volume: passed block_db devices: 0 physical, 1 LVM --> ZeroDivisionError: integer division or modulo by zero
Summary: [cee/sd][cephadm] ceph-volume: passed block_db devices: 0 physical, 1 LVM -->...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.1z1
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-06 05:19 UTC by Ameena Suhani S H
Modified: 2022-05-18 10:38 UTC
CC List: 5 users

Fixed In Version: ceph-16.2.7-110.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-18 10:38:17 UTC
Embargoed:


Links
Red Hat Issue Tracker RHCEPH-4246 (Last Updated: 2022-05-06 05:19:26 UTC)
Red Hat Product Errata RHBA-2022:4622 (Last Updated: 2022-05-18 10:38:20 UTC)

Description Ameena Suhani S H 2022-05-06 05:19:07 UTC
Description of problem:
Cloned from https://bugzilla.redhat.com/show_bug.cgi?id=2029695 to test the fix for 5.1z1.

 - For DB/WAL, a pre-created LVM layout is not supported by cephadm.
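The summary line shows where this surfaces: ceph-volume reports how many of the passed block_db devices are physical versus LVM, then sizes the DB slots by dividing across the physical devices only. A minimal hedged sketch of that arithmetic (hypothetical function and field names, not the actual ceph-volume source):

def size_db_slots(total_osds, db_devices):
    """Sketch of the failing slot arithmetic; names are hypothetical."""
    physical = [d for d in db_devices if not d["is_lv"]]
    lvm = [d for d in db_devices if d["is_lv"]]
    # Matches the message quoted in the bug title:
    print("passed block_db devices: %d physical, %d LVM" % (len(physical), len(lvm)))
    # With a pre-created LV layout, physical is empty, so this integer
    # division raises ZeroDivisionError: integer division or modulo by zero.
    return total_osds // len(physical)

size_db_slots(1, [{"is_lv": True}])  # 0 physical, 1 LVM -> ZeroDivisionError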

Version-Release number of selected component (if applicable):
 - RHCS 5.


Steps to Reproduce:
1. Install RHCS 4 with an advanced LVM configuration (pre-created logical volumes; see the sketch after this list).
2. Upgrade from RHCS 4 to RHCS 5.
3. Run the ceph_adoption playbook.
4. After the upgrade, do either of the following:
   - Add a new OSD
   - Replace an OSD
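For step 1, a hedged illustration of the "pre-created logical volumes" layout, driven from Python for consistency with the other sketches here (the disk, VG, and LV names are made up for the example):

import subprocess

def make_db_lv(disk="/dev/sdc", vg="ceph-db-vg", lv="db-lv1", size="50G"):
    """Create one DB logical volume on a spare disk (hypothetical names)."""
    subprocess.run(["vgcreate", vg, disk], check=True)                  # new VG on the disk
    subprocess.run(["lvcreate", "-n", lv, "-L", size, vg], check=True)  # carve out the DB LV

The resulting ceph-db-vg/db-lv1 path is the kind of pre-created LV that gets passed to ceph-volume as a block_db device during the OSD add or replace in step 4.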

Actual results:
 Cannot add an OSD using the advanced LVM configuration (separate DB on a pre-created LV); ceph-volume fails with "ZeroDivisionError: integer division or modulo by zero".

Expected results:
 Cephadm should manage the advanced LVM configuration.
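For reference, the shape the expected behavior would take, as a hedged sketch (assumed, not the verbatim ceph-16.2.7-110.el8cp patch): pre-created LVs are consumed as-is, and the slot division only runs when physical devices exist:

def size_db_slots_fixed(total_osds, db_devices):
    """Guarded variant of the sketch above; names remain hypothetical."""
    physical = [d for d in db_devices if not d["is_lv"]]
    lvm = [d for d in db_devices if d["is_lv"]]
    if not physical:
        # All DB devices are pre-created LVs: use them directly,
        # one per OSD, instead of dividing raw disks into slots.
        return {"lvs": lvm, "slots_per_device": None}
    return {"lvs": lvm, "slots_per_device": total_osds // len(physical)}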

Comment 6 errata-xmlrpc 2022-05-18 10:38:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:4622

