Description of problem:

We have found that ironic does not guarantee consistent disk names and disk paths across multiple introspections (see bug #1781801). As a result, a customer may want to use director to deploy Ceph nodes on top of hardware with inconsistent disk drive names. The only option such customers currently have is to maintain per-node disk mappings and update them after every OSD failure. A similar approach was already requested in ceph-ansible bug #1438590 (closed as Won't Fix).

I am wondering if it is possible to add to THT something close to root device hints: allow describing the OSD data and journal disk drives for Ceph by specifying their model, size and so on.
(In reply to Alex Stupnikov from comment #0)
> Description of problem:
>
> We have found that ironic does not guarantee consistent disk names
> and disk paths across multiple introspections (bug #1781801).

Yes, that's just how by-name device naming works. As described in the RHEL 7 Storage Administration Guide chapter on persistent naming:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/persistent_naming

Storage devices managed by the sd driver may not always have the same name across reboots, so a disk normally identified by /dev/sdc may be named /dev/sdb. It's also possible for the replacement disk of /dev/sdc to present itself to the operating system as /dev/sdd, even if your intent is to use it as a replacement for /dev/sdc. To address this, the same guide offers alternative names which are persistent and match the pattern /dev/disk/by-*.

> As a result, customer may want to use director to deploy Ceph nodes on
> top of hardware with inconsistent disk drive names.
>
> The only option such customers have is to have per-node disk mappings and
> update them with every OSD failure. It looks like a similar approach was
> already requested in ceph-ansible bug #1438590 (Closed as Won't Fix).
>
> I am wondering if it is possible to add to THT something close to root
> device hints: allow describing OSD data and journal disk drives for Ceph
> by specifying their model, size and so on?

So you want a syntax in THT to use names which are persistent and match the pattern /dev/disk/by-*. Yes, that would be the node-specific override feature:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/deploying_an_overcloud_with_containerized_red_hat_ceph/index#map_disk_layout_non-homogen_ceph

If you use by-path names as in the document above, you'll have names which persist across reboots.
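To illustrate, the node-specific override can reference persistent by-path names directly. A minimal sketch (the system UUID and PCI paths below are made up; see the linked document for the exact syntax and how to obtain each node's UUID from introspection data):

```yaml
parameter_defaults:
  # NodeDataLookup maps a node's system UUID (reported by introspection)
  # to Ceph settings that apply only to that node.
  NodeDataLookup:
    32e87b4c-c4a7-418e-865b-191684a6883b:
      devices:
        # by-path names follow the physical slot, so a replacement disk
        # in the same slot keeps the same name across reboots.
        - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0
        - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0
```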
The above can be tricky to configure, so as per bug 1796191 we're shipping a tool to make it easier to generate the per-node disk mappings. See docbug 1796197 for more info on how to use it. This tool has already shipped in OSP 16 and you should see it on your undercloud here:

[stack@undercloud-0 ~]$ ls -l /usr/share/openstack-tripleo-heat-templates/tools/make_ceph_disk_list.py
-rwxr-xr-x. 1 root root 5948 Feb 26 14:24 /usr/share/openstack-tripleo-heat-templates/tools/make_ceph_disk_list.py
[stack@undercloud-0 ~]$

See also bug 1636508, which adds device class support to ceph-ansible for more advanced cases where you're configuring custom crush maps. When ceph-ansible supports this we'll have a method to use it in OSP 16.1+.

*** This bug has been marked as a duplicate of bug 1796191 ***
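In essence, such a generator reads each node's introspection data and emits the persistent disk names as a NodeDataLookup mapping. A rough Python sketch of the idea (not the shipped tool's actual code or CLI; the function name and the shape of the input data are assumptions for illustration):

```python
import json

def build_node_data_lookup(introspection_by_uuid, key="by_path"):
    """Given {system_uuid: [disk records]}, pick each disk's persistent
    /dev/disk/by-* name and emit a NodeDataLookup-style mapping."""
    lookup = {}
    for system_uuid, disks in introspection_by_uuid.items():
        lookup[system_uuid] = {
            "devices": [d[key] for d in disks if d.get(key)]
        }
    return {"parameter_defaults": {"NodeDataLookup": lookup}}

# Made-up introspection data for one node (shape is an assumption):
data = {
    "32e87b4c-c4a7-418e-865b-191684a6883b": [
        {"name": "/dev/sda",
         "by_path": "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0"},
        {"name": "/dev/sdb",
         "by_path": "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0"},
    ]
}
print(json.dumps(build_node_data_lookup(data), indent=2))
```

The output is a heat environment fragment that can be passed to the overcloud deploy, so the mapping only needs regenerating when a disk moves to a different physical slot.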