Setting --property root_device='{"hctl": "<value>"}' isn't always respected.

Environment:
openstack-ironic-conductor-10.1.2-0.20180302010846.7cd9deb.el7ost.noarch
instack-undercloud-8.3.1-0.20180304032746.fc5704f.el7ost.noarch
puppet-ironic-12.3.1-0.20180221115553.12ab03d.el7ost.noarch
python2-ironicclient-2.2.0-0.20180211230646.683b7c6.el7ost.noarch
openstack-ironic-common-10.1.2-0.20180302010846.7cd9deb.el7ost.noarch
openstack-ironic-staging-drivers-0.9.0-0.20180220235748.de59d74.el7ost.noarch
openstack-ironic-api-10.1.2-0.20180302010846.7cd9deb.el7ost.noarch
openstack-ironic-inspector-7.2.1-0.20180302142656.397a98a.el7ost.noarch
python-ironic-inspector-client-3.1.0-0.20180213173236.c82b59f.el7ost.noarch
python-ironic-lib-2.12.0-0.20180213172054.831c55b.el7ost.noarch

Introspection data shows two disks:

(undercloud) [stack@wshed-director noqs]$ openstack baremetal introspection data save c14u25b05 | jq ".inventory.disks"
[
  {
    "size": 146815733760,
    "serial": "50000395a8394fd0",
    "wwn": "0x50000395a8394fd0",
    "rotational": true,
    "vendor": "TOSHIBA",
    "name": "/dev/sda",
    "wwn_vendor_extension": null,
    "hctl": "0:0:0:0",
    "wwn_with_extension": "0x50000395a8394fd0",
    "by_path": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:0:0",
    "model": "MK1401GRRB"
  },
  {
    "size": 146815733760,
    "serial": "50000395a8394f90",
    "wwn": "0x50000395a8394f90",
    "rotational": true,
    "vendor": "TOSHIBA",
    "name": "/dev/sdb",
    "wwn_vendor_extension": null,
    "hctl": "0:0:1:0",
    "wwn_with_extension": "0x50000395a8394f90",
    "by_path": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:1:0",
    "model": "MK1401GRRB"
  }
]

So I configured root_device with the following command:

openstack baremetal node set --resource-class OPENSHIFTWORKER c14u25b06 --property root_device='{"hctl": "0:0:0:0"}'

I then attempted to deploy the overcloud, and the node still booted from another disk (which had an OS on it from a previous deployment).
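For context, the expected behaviour can be sketched as follows. This is a hypothetical, simplified illustration (not Ironic's actual implementation, which lives in ironic-lib and also supports comparison operators): a root_device hint is a dict, and the first disk whose fields equal every key/value in the hint should be chosen as the root device.

```python
def match_root_device(disks, hints):
    """Return the name of the first disk matching every hint, or None.

    Simplified sketch of root_device hint matching: each key in the
    hints dict must equal the corresponding field of the disk.
    """
    for disk in disks:
        if all(disk.get(key) == value for key, value in hints.items()):
            return disk["name"]
    return None


# Disk fields taken from the introspection data above (trimmed).
disks = [
    {"name": "/dev/sda", "hctl": "0:0:0:0", "serial": "50000395a8394fd0"},
    {"name": "/dev/sdb", "hctl": "0:0:1:0", "serial": "50000395a8394f90"},
]

print(match_root_device(disks, {"hctl": "0:0:0:0"}))          # /dev/sda
print(match_root_device(disks, {"serial": "50000395a8394f90"}))  # /dev/sdb
```

With this matching, the hctl hint above should have selected /dev/sda; the bug report is that the node nevertheless booted from a different disk.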
I ended up setting:

openstack baremetal node set --resource-class OPENSHIFTWORKER c14u25b05 --property root_device='{"serial": "50000395a8394fd0"}'

and the overcloud deployed correctly (it chose the right disk).
Could you please try wiping the hard drives and seeing if the problem persists? Something like this may happen when cleaning is disabled.
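As a sketch of what "wiping" means here (hedged: option and step names below are my recollection of the TripleO/Ironic interfaces of that era, verify against your version's docs), automated cleaning can be enabled on the undercloud, or a metadata-erase clean step can be run manually:

```ini
# undercloud.conf: have Ironic clean nodes between deployments
[DEFAULT]
clean_nodes = true
```

```shell
# Or clean one node manually (node must be in the manageable state):
openstack baremetal node manage c14u25b05
openstack baremetal node clean \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]' \
    c14u25b05
openstack baremetal node provide c14u25b05
```

Erasing partition metadata removes the stale OS left by the previous deployment, so the firmware/bootloader can no longer pick it up.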
At some point I reproduced the problem even when using serial on the same setup. After cleaning the metadata, the issue didn't reproduce.
> After cleaning the metadata the issue didn't reproduce.

I think this proves the issue was just that the node was not cleaned, correct?
Sasha - I'd like to close this, since it works fine when the node is cleaned. OK?
I think we can close it; it's a known issue when cleaning is disabled.