As an operator I want to ensure all available storage disks (except root) are used by default on Swift storage nodes.

As of today Swift uses only a directory on the root filesystem by default. This can be changed by using the SwiftRawDisks parameter; however, there are some drawbacks:

1. All nodes must have the defined disks (i.e. it is not possible to mix storage nodes with different numbers of disks)
2. Weights for all disks are set to 100 and do not reflect the available space; manual changes are overwritten on overcloud updates
3. All nodes and disks are grouped into a single region & zone

With the recent changes in TripleO/Mistral it is now possible to use the node IP addresses when running Ansible playbooks in the deployment workflow. Ansible itself creates an inventory of available hardware, which includes disks, their size, and type (SSD/rotational). Based on this data it is possible to execute an Ansible playbook with the following tasks (a sketch follows the work items below):

1. Gather disk data of storage nodes
2. Add all found non-root disks to the rings with an appropriate weight
3. Upload these rings to the undercloud container, from which all nodes will fetch them later on

If an operator changes device weights, region or zone, the playbook must not modify them; it therefore ignores existing devices on subsequent runs and only adds missing ones.

Work items
----------

1. Add an option to change region/zone for an existing ring (without deleting & re-adding disks, which is required today)
2. Implement the Ansible playbook and include it in the deployment workflow
3. Disable ring management by puppet-swift
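A minimal sketch of such a playbook, limited to the object ring for brevity. It assumes rings are built on the undercloud (hence delegate_to: localhost); the host group name swift_storage_nodes, builder path, port 6000, the size-in-GB weight formula, and the overcloud-swift-rings container name are all illustrative, not a final interface. The idempotency check that skips devices already present in a ring is also omitted here.

  # Sketch only -- names, paths and ports are assumptions, not the final design.
  - hosts: swift_storage_nodes
    gather_facts: true
    # Process one node at a time to avoid concurrent writes to the builder file.
    serial: 1
    tasks:
      # 1. Gather disk data: pick block devices without partitions or
      #    holders, a rough heuristic for "unused, non-root disk".
      - name: Collect unused disks from the Ansible hardware inventory
        set_fact:
          swift_disks: "{{ swift_disks | default([]) + [item.key] }}"
        loop: "{{ ansible_devices | dict2items }}"
        when:
          - item.key is match("^(sd|vd|nvme)")
          - item.value.partitions | length == 0
          - item.value.holders | length == 0

      # 2. Add each disk with a weight derived from its size in GB
      #    (sectors * sector size). A real playbook must first check that
      #    the device is not already in the ring.
      - name: Add disks to the object ring with size-based weights
        command: >
          swift-ring-builder /etc/swift/object.builder add
          r1z1-{{ ansible_default_ipv4.address }}:6000/{{ item }}
          {{ ((ansible_devices[item].sectors | int) *
              (ansible_devices[item].sectorsize | int) /
              1024 / 1024 / 1024) | round | int }}
        loop: "{{ swift_disks | default([]) }}"
        delegate_to: localhost

  # 3. Rebalance and upload once, after all nodes have been processed.
  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Rebalance the object ring
        command: swift-ring-builder /etc/swift/object.builder rebalance

      - name: Upload the ring to the undercloud container
        command: swift upload overcloud-swift-rings /etc/swift/object.ring.gz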
We should change the goal here slightly and take the disk layout building from the existing patch. To make deployment options within TripleO consistent, we should define disks for Swift with the same (or very similar) config settings we already use for Ceph, for example like this:

  resource_registry:
    OS::TripleO::SwiftStorageExtraConfigPre: ./overcloud/puppet/extraconfig/pre_deploy/per_node.yaml

  parameter_defaults:
    NodeDataLookup: >
      {
        "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
          "SwiftAnsibleDisksConfig": {
            "object_devices": [
              "/dev/sdd",
              "/dev/sde",
              "/dev/sdf",
              "/dev/sdg"
            ],
            "devices": [
              "/dev/sdb",
              "/dev/sdc"
            ],
            "region": 2,
            "zone": 5
          }
        },
        "4C4C4544-0047-3610-8053-C8C04F484B32": {
          "SwiftAnsibleDisksConfig": {
            "object_devices": [
              "/dev/sdd",
              "/dev/sde",
              "/dev/sdf",
              "/dev/sdg",
              "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
              "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
            ],
            "devices": [
              "/dev/sdb",
              "/dev/sdc"
            ]
          }
        }
      }

At the end of the day this will result in an improved deployment, consistent operator settings, and a cleaner patch without using the Ansible inventory to gather disks (which is not the OOO way).
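To show how the ring building could consume such per-node settings, here is a similarly hedged sketch. It assumes the node's SwiftAnsibleDisksConfig entry is exposed to the play as a host variable (called swift_ansible_disks_config here), that "devices" feeds the account and container rings while "object_devices" feeds the object ring (my reading of the example above), and that region/zone default to 1; the ports, builder paths and the fixed weight of 100 are again placeholders rather than the final interface.

  # Sketch only: variable names, ports, paths and defaults are assumptions.
  - hosts: swift_storage_nodes
    serial: 1   # one node at a time; the builder files live on the undercloud
    vars:
      cfg: "{{ swift_ansible_disks_config | default({}) }}"
      ring_ports: {account: 6002, container: 6001, object: 6000}
    tasks:
      # "devices" is added to the account and container rings.
      - name: Add devices to the account and container rings
        command: >
          swift-ring-builder /etc/swift/{{ item.0 }}.builder add
          r{{ cfg.region | default(1) }}z{{ cfg.zone | default(1) }}-{{ ansible_default_ipv4.address }}:{{ ring_ports[item.0] }}/{{ item.1 | basename }}
          100
        loop: "{{ ['account', 'container'] | product(cfg.devices | default([])) | list }}"
        delegate_to: localhost

      # "object_devices" is added to the object ring, falling back to "devices".
      - name: Add devices to the object ring
        command: >
          swift-ring-builder /etc/swift/object.builder add
          r{{ cfg.region | default(1) }}z{{ cfg.zone | default(1) }}-{{ ansible_default_ipv4.address }}:{{ ring_ports.object }}/{{ item | basename }}
          100
        loop: "{{ cfg.object_devices | default(cfg.devices | default([])) }}"
        delegate_to: localhost

Note that using basename for the ring device name keeps /dev/disk/by-path entries usable while still producing stable, human-readable device names in the ring.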