Bug 2188246

Summary: Error seen after applying OSD specification file
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Yalcin <yalbayra>
Component: Ceph-Volume
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Aditya Ramteke <aramteke>
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.3
CC: akraj, ceph-eng-bugs, cephqe-warriors, gabrioux, gjose, sostapov, tserlin, vereddy
Target Milestone: ---   
Target Release: 6.1z1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.6-92.el9cp
Doc Type: Bug Fix
Doc Text:
.Devices already used by Ceph are filtered out in `ceph-volume`
Previously, due to a bug, `ceph-volume` would not filter out devices already used by Ceph. Due to this, adding new OSDs with `ceph-volume` failed when using pre-created LVs. With this fix, devices already used by Ceph are filtered out in `ceph-volume` as expected and new OSDs with pre-created LVs can now be added.
Story Points: ---
Clone Of:
: 2209319 (view as bug list)
Environment:
Last Closed: 2023-08-03 16:45:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2209319, 2221020    
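
The Doc Text above frames the fix as filtering out devices Ceph already uses. As a rough way to see which LVs on a host are already owned by Ceph, the LVM tags that ceph-volume writes can be inspected directly; a minimal sketch, assuming the usual ceph.osd_id tag (the exact tag set varies by release):

# LVs already consumed by Ceph carry ceph-volume tags such as ceph.osd_id
lvs -o lv_path,lv_tags

# Or ask ceph-volume itself which LVs/devices it already manages
ceph-volume lvm list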

Description Yalcin 2023-04-20 09:13:36 UTC
Created attachment 1958498 [details]
ceph volume log

Description of problem:
The customer upgraded the cluster from RHCS 4 to RHCS 5. Currently, all OSDs are in an unmanaged state, which is expected (https://bugzilla.redhat.com/show_bug.cgi?id=2131230).
They are trying to apply an OSD spec that covers all OSDs in the cluster so that faulty OSD devices can be replaced, but applying the new spec fails with the error quoted below.
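
For illustration, an all-OSD spec of the kind described here, built on pre-created LVs, might look like the following sketch; the service ID and LV paths are hypothetical, and --dry-run previews the plan without applying it. The actual error they hit follows the sketch.

# Hypothetical spec; a real deployment would list one LV pair per OSD
cat > osd_spec.yaml << 'EOF'
service_type: osd
service_id: all_osds            # hypothetical name
placement:
  host_pattern: '*'
spec:
  data_devices:
    paths:
      - /dev/vg_data/lv_data1   # pre-created data LV (hypothetical path)
  db_devices:
    paths:
      - /dev/vg_db/lv_db1       # pre-created DB LV (hypothetical path)
EOF

# Preview what the orchestrator would do before applying for real
ceph orch apply -i osd_spec.yaml --dry-run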

cephadm exited with an error code: 1, stderr: Inferring config
...
Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e ...
...
...
/bin/podman: stderr --> passed data devices: 0 physical, 8 LVM
/bin/podman: stderr --> relative data size: 1.0
/bin/podman: stderr --> passed block_db devices: 0 physical, 8 LVM
/bin/podman: stderr --> 8 fast allocations != 1 num_osds
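
Reading these counts against the Doc Text: seven of the eight data LVs apparently already carried OSDs, leaving one new OSD to create (num_osds = 1), while all eight DB LVs were still counted as fast allocations because ceph-volume did not filter out the ones already in use. The planner's pairing can be reproduced non-destructively with a report run; the LV paths below are hypothetical:

# Read-only: show how ceph-volume would pair data LVs with DB LVs
ceph-volume lvm batch --report /dev/vg_data/lv_data1 --db-devices /dev/vg_db/lv_db1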

Version-Release number of selected component (if applicable):
16.2.10-138.el8cp

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
Applying the OSD spec fails with the error shown above.

Expected results:
The OSD spec is applied successfully.

Additional info:
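One way to confirm the expected behaviour once the spec applies cleanly (illustrative commands; the service name depends on the applied spec):

# The OSD service should show up as managed by the orchestrator
ceph orch ls osd

# Export the stored spec to confirm it matches what was applied
ceph orch ls osd --export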

Comment 9 Scott Ostapovicz 2023-07-12 12:48:34 UTC
Missed the 6.1 z1 window.  Retargeting to 6.1 z2.

Comment 17 errata-xmlrpc 2023-08-03 16:45:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4473