Bug 2215042 - [CEE/SD][ceph-volume]Even though there is enough space in the DB device, the OSDs are not being created after attaching the device
Summary: [CEE/SD][ceph-volume]Even though there is enough space in the DB device, the ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 5.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 5.3z4
Assignee: Guillaume Abrioux
QA Contact: Aditya Ramteke
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On: 2203397 2239888
Blocks: 2210690
 
Reported: 2023-06-14 14:39 UTC by Guillaume Abrioux
Modified: 2023-09-20 15:59 UTC (History)
CC List: 13 users

Fixed In Version: ceph-16.2.10-184.el8cp
Doc Type: Bug Fix
Doc Text:
.Re-running the `ceph-volume lvm batch` command against already created devices is now possible
Previously, `ceph-volume` did not set `lvm` membership for _mpath_ devices as it did for other types of supported devices. Due to this, re-running the `ceph-volume lvm batch` command against already created devices was not possible. With this fix, `lvm` membership is set for _mpath_ devices, and re-running the `ceph-volume lvm batch` command against already created devices is now possible.
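The Doc Text above can be illustrated with a short sketch of the workflow the fix restores. The device paths below are hypothetical examples (not taken from this bug report), and the commands require a live Ceph cluster, so this is illustrative only:

```shell
# Initial batch run: create OSDs on multipath data devices with a
# shared DB device (device paths are made-up examples).
ceph-volume lvm batch --bluestore \
    /dev/mapper/mpatha /dev/mapper/mpathb \
    --db-devices /dev/nvme0n1

# Before the fix, re-running the batch against the same devices failed
# even with free space left on the DB device, because lvm membership
# was not recorded for mpath devices. With the fix, the same command
# can be re-run after attaching another data device; --report shows
# the planned layout without making changes.
ceph-volume lvm batch --bluestore \
    /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc \
    --db-devices /dev/nvme0n1 --report
```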
Clone Of: 2203397
Environment:
Last Closed: 2023-07-19 16:19:11 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 52058 0 None Merged ceph-volume: set lvm membership for mpath type devices 2023-06-26 07:08:43 UTC
Red Hat Issue Tracker RHCEPH-6829 0 None None None 2023-06-14 14:41:02 UTC
Red Hat Product Errata RHBA-2023:4213 0 None None None 2023-07-19 16:20:02 UTC

Comment 1 Scott Ostapovicz 2023-06-14 16:00:16 UTC
Missed the 5.3 z4 deadline.  Moving from z4 to z5.

Comment 14 errata-xmlrpc 2023-07-19 16:19:11 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4213

