Bug 2188246 - Error seen after applied osd specification file
Summary: Error seen after applied osd specification file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.1z1
Assignee: Guillaume Abrioux
QA Contact: Aditya Ramteke
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2209319 2221020
 
Reported: 2023-04-20 09:13 UTC by Yalcin
Modified: 2024-05-21 04:38 UTC
CC: 8 users

Fixed In Version: ceph-17.2.6-92.el9cp
Doc Type: Bug Fix
Doc Text:
.Devices already used by Ceph are filtered out in `ceph-volume`
Previously, due to a bug, `ceph-volume` would not filter out devices already used by Ceph. As a result, adding new OSDs with `ceph-volume` failed when using pre-created LVs. With this fix, devices already used by Ceph are filtered out in `ceph-volume` as expected, and new OSDs with pre-created LVs can now be added.
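For illustration only, the filtering idea behind the fix can be sketched as follows. This is a hypothetical sketch, not the actual `ceph-volume` source (see the linked PR 51343 for the real change); the `LV` class, attribute names, and the `ceph.osd_id` tag key are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class LV:
    """Hypothetical stand-in for a ceph-volume logical volume object."""
    name: str
    tags: dict = field(default_factory=dict)

def get_lvm_fast_allocs(lvs):
    # Keep only LVs that Ceph has not already claimed: an LV consumed
    # by an existing OSD (data, db, or wal) carries Ceph metadata tags,
    # so it must be excluded from the pool of candidate fast devices.
    return [lv for lv in lvs if not lv.tags.get("ceph.osd_id")]

free = LV("lv_new")
used = LV("lv_old", tags={"ceph.osd_id": "12"})
print([lv.name for lv in get_lvm_fast_allocs([free, used])])  # ['lv_new']
```

Without this kind of filter, every pre-created LV in the spec, including those already backing OSDs, is counted as a candidate, which is what made the batch planner's device counts disagree.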
Clone Of:
: 2209319 (view as bug list)
Environment:
Last Closed: 2023-08-03 16:45:09 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 51343 0 None Merged ceph-volume: fix a bug in `get_lvm_fast_allocs()` (batch) 2023-08-16 11:17:24 UTC
Red Hat Issue Tracker RHCEPH-6493 0 None None None 2023-04-20 09:13:54 UTC
Red Hat Knowledge Base (Solution) 7012104 0 None None None 2023-05-10 09:42:23 UTC
Red Hat Product Errata RHBA-2023:4473 0 None None None 2023-08-03 16:46:18 UTC

Description Yalcin 2023-04-20 09:13:36 UTC
Created attachment 1958498 [details]
ceph volume log


Description of problem:
Customer upgraded the cluster from RHCS 4 to RHCS 5. Currently all OSDs are in an unmanaged state, which is expected (https://bugzilla.redhat.com/show_bug.cgi?id=2131230).
They are trying to apply an OSD specification that covers all OSDs in the cluster so that faulty OSD devices can be replaced. However, applying the new spec fails with the error below.

''')): cephadm exited with an error code: 1, stderr:Inferring config
...
Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e ...
...
...
/bin/podman: stderr --> passed data devices: 0 physical, 8 LVM
/bin/podman: stderr --> relative data size: 1.0
/bin/podman: stderr --> passed block_db devices: 0 physical, 8 LVM
/bin/podman: stderr --> 8 fast allocations != 1 num_osds
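The last log line is a consistency check in the batch planner: the number of fast-device (block_db) allocations must match the number of OSDs to be created. Because already-used LVs were not filtered out, all 8 db LVs were counted against the single new OSD. A rough sketch of such a check, with hypothetical names rather than the actual `ceph-volume` code:

```python
def check_fast_allocations(fast_devs, num_osds):
    # Refuse to proceed when the count of fast (db/wal) allocations
    # disagrees with the number of OSDs the batch run will create.
    if len(fast_devs) != num_osds:
        raise RuntimeError(
            f"{len(fast_devs)} fast allocations != {num_osds} num_osds")
    return True
```

With the fix, the 7 LVs already backing OSDs are excluded, so the counts line up and the check passes.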

Version-Release number of selected component (if applicable):
16.2.10-138.el8cp

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
Applying the OSD spec fails with an error.

Expected results:
The OSD spec is applied successfully.

Additional info:

Comment 9 Scott Ostapovicz 2023-07-12 12:48:34 UTC
Missed the 6.1 z1 window.  Retargeting to 6.1 z2.

Comment 17 errata-xmlrpc 2023-08-03 16:45:09 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4473

