Description of problem:
This is a problem described in an upstream ironic bug.
(Please check that I assigned it to the right component; I wasn't sure.)
It impacts any Ceph deployment done by RHOSP director, including but not limited to HCI. Specifically, in the scale lab configuration we have found it necessary to use /dev/disk/by-path names for Ceph OSD devices in the YAML deployment configuration files in order to get HCI deploys to work reliably. We expect this to happen on other hardware configurations as well, because Linux does not guarantee stability of block device names (i.e. /dev/sd[a-z]) across reboots.

However, introspection does not report /dev/disk/by-path/ names, so it becomes a catch-22: you have to deploy in order to deploy ;-) While introspection is running, it could easily look in the /dev/disk/by-path/ tree, find the symlink pointing to a particular block device, and report that. This would enable admins to generate YAML files that work for RHOSP-director Ceph deployments the first time and every time.
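The lookup described above can be sketched in a few lines of Python. This is a hypothetical helper, not the actual ironic-python-agent code: it scans the /dev/disk/by-path directory (the path is a parameter so it can be pointed elsewhere for testing) and maps each resolved block device back to its stable symlink name.

```python
import os

def by_path_names(dev_dir="/dev/disk/by-path"):
    """Map real block device paths (e.g. /dev/sda) to their stable
    /dev/disk/by-path symlink names.

    Hypothetical illustration of the lookup introspection could do;
    not the actual ironic-python-agent implementation.
    """
    mapping = {}
    if not os.path.isdir(dev_dir):
        # Tree is absent (e.g. no udev); report nothing rather than fail.
        return mapping
    for name in os.listdir(dev_dir):
        link = os.path.join(dev_dir, name)
        target = os.path.realpath(link)  # e.g. /dev/sda
        # A device can have several by-path aliases; keep the first seen.
        mapping.setdefault(target, link)
    return mapping
```

With such a mapping in hand, the introspection inventory could report the by-path alias alongside the kernel device name for each disk.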
Version-Release number of selected component (if applicable):
RHOSP 10 GA (Newton) is where we first noticed this problem.
Steps to Reproduce:
1. do an introspection
2. openstack baremetal introspection data save <uuid>
Actual results:
No /dev/disk/by-path info is reported for each block device.

Expected results:
Each block device's /dev/disk/by-path name is included.
Posted a potential patch here.
Implementation largely based on Ben's patch (thank you!): https://review.openstack.org/#/c/498489/
Ilya, Thank you for helping to get this feature added. -ben
In puddle 2018-04-06.1
(undercloud) [stack@host01 ~]$ openstack overcloud node introspect --all-manageable --provide
(undercloud) [stack@host01 ~]$ openstack baremetal introspection data save host2 | jq .inventory.disks
"by_path": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", <===
"model": "PERC H710"
So it looks like the by_path info is there.
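For admins who want to turn the saved introspection data into a stable device list for their Ceph YAML, a small sketch like the following can pull out the by_path values. The inventory.disks[].by_path field names follow the jq output shown above; the rest of the JSON structure is assumed for illustration.

```python
import json

def osd_devices(introspection_json):
    """Collect the stable /dev/disk/by-path names from saved
    introspection data (the output of
    `openstack baremetal introspection data save`).

    Sketch only: field layout beyond inventory.disks[].by_path
    is an assumption.
    """
    data = json.loads(introspection_json)
    return [d["by_path"] for d in data["inventory"]["disks"] if d.get("by_path")]

# Example input shaped like the saved introspection data above:
sample = json.dumps({"inventory": {"disks": [
    {"name": "/dev/sda",
     "by_path": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0",
     "model": "PERC H710"},
]}})
```

The resulting list can be pasted into the CephAnsibleDisksConfig (or equivalent) devices section so the deploy no longer depends on /dev/sd[a-z] ordering.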
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.