Red Hat Bugzilla – Bug 1520004
per host CephAnsibleDisksConfig are ignored
Last modified: 2018-07-16 11:56:35 EDT
Description of problem:

I have the following template to do the mapping of my disks:

resource_registry:
  OS::TripleO::CephStorageExtraConfigPre: ./overcloud/puppet/extraconfig/pre_deploy/per_node.yaml

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
            (...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
            (...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
            (...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
            (...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
            (...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
            (...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      }
    }

And the following extra template:

resource_registry:
  OS::TripleO::NodeUserData: ./first-boot.yaml

parameter_defaults:
  NovaEnableRbdBackend: true
  CephConfigOverrides:
    journal_size: 10000
  journal_collocation: false
  raw_multi_journal: true

If I go on the first Ceph node, I can validate that the data is propagated properly with:

cat /etc/puppet/hieradata/4C4C4544-0047-3610-8053-C8C04F484B32.json | jq .
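As an aside, a NodeDataLookup blob this size is easy to get wrong by hand, so a local sanity check before deploying can save a deployment cycle. The helper below is a hypothetical illustration (not part of the director tooling): it parses the JSON, checks that each top-level key looks like a dmidecode-style system UUID, and for the non-collocated scenario checks that devices and dedicated_devices have equal length, since each OSD data disk needs a matching journal entry.

```python
import json
import re

# dmidecode-style system UUID: 8-4-4-4-12 hex groups
UUID_RE = re.compile(r"^[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}$")

def check_node_data_lookup(blob):
    """Return a list of problems found in a NodeDataLookup JSON string."""
    data = json.loads(blob)
    problems = []
    for uuid, node in data.items():
        if not UUID_RE.match(uuid):
            problems.append("%s: not a dmidecode-style system UUID" % uuid)
        # Accept both shapes: with or without the CephAnsibleDisksConfig wrapper
        disks = node.get("CephAnsibleDisksConfig", node)
        if disks.get("osd_scenario") == "non-collocated":
            devs = disks.get("devices", [])
            jrnls = disks.get("dedicated_devices", [])
            if len(devs) != len(jrnls):
                problems.append("%s: %d devices but %d dedicated_devices"
                                % (uuid, len(devs), len(jrnls)))
    return problems

sample = '''{"4C4C4544-0047-3610-8031-C8C04F4A4B32":
  {"CephAnsibleDisksConfig":
    {"devices": ["/dev/vdb", "/dev/vdc"],
     "dedicated_devices": ["/dev/vdd"],
     "osd_scenario": "non-collocated"}}}'''
print(check_node_data_lookup(sample))
# -> ['4C4C4544-0047-3610-8031-C8C04F4A4B32: 2 devices but 1 dedicated_devices']
```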
However, when I check /var/log/mistral/ceph-install-workflow.log, ceph-ansible only knows about /dev/vda and ignores my CephAnsibleDisksConfig key.

Version-Release number of selected component (if applicable):
Puddle: RH7-RHOS-12.0 2017-11-28.3
ceph-ansible-3.0.14-1.el7cp.noarch
This setup comes with 3 Ceph nodes, each with 3 SSDs and 12 SAS disks. Every time I do a deployment, at least one of the 3 nodes gets a disk renamed, so we can no longer do any deployment with the director installer. Unless we go through a manual deployment, I don't see any other option to reliably get a working Ceph deployment. I think this will have quite a large impact and should be mentioned in the documentation.
osd_auto_discovery would not help here because the problem happens after the first reboot and before the Ceph deployment.
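For reference, the rename described above can be made visible by resolving the /dev/disk/by-path symlinks before and after a reboot and diffing the two results. A minimal sketch, assuming a standard udev-populated /dev/disk/by-path directory; this is an illustration, not part of the director workflow:

```python
import os

def resolve_by_path(directory="/dev/disk/by-path"):
    """Map each stable by-path symlink to the kernel device it currently points at."""
    mapping = {}
    for name in sorted(os.listdir(directory)):
        link = os.path.join(directory, name)
        if os.path.islink(link):
            # realpath follows the symlink to the current /dev/sdX name
            mapping[name] = os.path.realpath(link)
    return mapping
```

Running this before and after a reboot and comparing the mappings shows exactly which kernel name moved, which is why the templates in this bug address disks by path rather than by /dev/sdX name.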
This is blocking partner integration activity. Accordingly I would like to request a hot fix to allow testing to proceed.
Build openstack-tripleo-heat-templates-7.0.3-20.el7ost includes a patch from this bug; please update the BZ state accordingly.
Build openstack-tripleo-common-7.6.3-9.el7ost contains a patch from this bug; please update the BZ state accordingly.
Hmm, there is something wrong with my setup. My configuration is still ignored even though it looks correct according to the documentation [0]: ceph-ansible keeps trying to access /dev/vdb.

[0]: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_specific_hieradata.html

This is my file:

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5ca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5cb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      }
    }
Please ignore my previous comment; I was stuck on an old puddle (RH7-RHOS-12.0 2017-12-01.4).
Verification failed: ceph-ansible failed to deploy the Ceph cluster with the given OSD configuration.
Verified with the following configuration:

NodeDataLookup: |
  {
    "4929BFB8-0ED4-48D7-B34F-9AD615E96112": {"devices": ["/dev/vdb", "/dev/vdc"], "osd_scenario": "collocated"},
    "9EFD920F-FC86-4AA4-BBD5-CBD075999C6D": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"},
    "5CCE1DF9-0905-4B7C-A0B9-FEDDB19191C8": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"}
  }
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0253
*** Bug 1600856 has been marked as a duplicate of this bug. ***