Bug 2160439

Summary: [GSS] osds pods stuck in CLBO after adding 3 new osds
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: amansan <amanzane>
Component: ceph
Assignee: Michael J. Kidd <linuxkidd>
ceph sub component: RADOS
QA Contact: Elad <ebenahar>
Status: CLOSED DUPLICATE
Docs Contact:
Severity: high
Priority: high
CC: bniver, hnallurv, kelwhite, linuxkidd, madam, muagarwa, nojha, ocs-bugs, odf-bz-bot, rzarzyns, sostapov, tnielsen, vumrao
Version: 4.10
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-01-18 13:48:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Comment 6 kelwhite 2023-01-12 19:28:52 UTC
Travis, 

Double-checking with the customer, but I don't believe these are mpath devices. From dmesg, it looks like the multipathd service is enabled but not configured:

Dec 13 10:11:56 localhost multipathd[755]: --------start up--------
Dec 13 10:11:56 localhost multipathd[755]: read /etc/multipath.conf
Dec 13 10:11:56 localhost multipathd[755]: /etc/multipath.conf does not exist, blacklisting all devices.
Dec 13 10:11:56 localhost multipathd[755]: You can run "/sbin/mpathconf --enable" to create
Dec 13 10:11:56 localhost multipathd[755]: /etc/multipath.conf. See man mpathconf(8) for more details

Anyway, I'm having the customer double-check by running 'multipath -ll'.