Bug 1469452 - [RFE][Swift] Use Ansible to create and update Swift rings
Status: ON_DEV
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-common
Version: 13.0 (Queens)
Hardware/OS: Unspecified / Unspecified
Priority: medium / Severity: unspecified
Target Milestone: Upstream M1
Target Release: 14.0 (Rocky)
Assigned To: Christian Schwede (cschwede)
QA Contact: Mike Abrams
Docs Contact: Kim Nylander
Keywords: FutureFeature, Triaged
Depends On: 1469435
Reported: 2017-07-11 05:49 EDT by Christian Schwede (cschwede)
Modified: 2018-03-08 10:15 EST
CC: 6 users

Type: Bug


External Trackers
Launchpad 1741875 (last updated 2018-01-08 05:46 EST)
OpenStack gerrit 529708 (last updated 2017-12-21 16:35 EST)

Description Christian Schwede (cschwede) 2017-07-11 05:49:50 EDT
As an operator I want to ensure all available storage disks (except root) are used by default on Swift storage nodes.

As of today, Swift uses only a directory on the root filesystem by default. This can be changed with the SwiftRawDisks parameter; however, this approach has some drawbacks:

1. All nodes must have these defined disks (i.e., it is not possible to mix storage nodes with different numbers of disks)
2. Weights for all disks are set to 100 and do not reflect the available space. If changed manually, they are overwritten on overcloud updates.
3. All nodes and disks are grouped into a single region & zone
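
Drawback 2 could be addressed by deriving each disk's ring weight from its size instead of assigning a fixed 100. A minimal sketch, assuming a hypothetical convention of one weight unit per GiB (the actual convention would be up to the playbook):

```python
def size_to_weight(size_bytes: int) -> float:
    """Map a disk's size to a ring weight.

    Hypothetical convention: one weight unit per GiB, so a disk twice
    as large receives twice the share of partitions after a rebalance.
    """
    GIB = 1024 ** 3
    return round(size_bytes / GIB, 2)


# A 4 TiB disk gets weight 4096.0 and a 1 TiB disk gets 1024.0,
# i.e. proportional shares, unlike the fixed weight of 100 used today.
print(size_to_weight(4 * 1024 ** 4))   # 4096.0
print(size_to_weight(1 * 1024 ** 4))   # 1024.0
```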

With the recent changes in TripleO/Mistral it is now possible to use the node IP addresses when running Ansible playbooks in the deployment workflow. Ansible itself creates an inventory of the available hardware, including disks, their size, and their type (SSD/rotational). Based on this data, an Ansible playbook can execute the following tasks:

1. Gather disk data of storage nodes
2. Add all found non-root disks to the rings with an appropriate weight
3. Upload these rings to the undercloud container which will be fetched later on by all nodes

If an operator changes device weights, region, or zone, these changes should not be overwritten by the playbook. The playbook therefore ignores existing devices on subsequent runs, only adding missing devices.
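
The tasks above, together with the idempotency requirement, can be sketched in plain Python (all names and the root-disk convention here are hypothetical; the real work would happen in an Ansible playbook driven by gathered facts):

```python
def update_ring(ring_devices, node_facts):
    """Add missing non-root disks to a ring device list.

    ring_devices: list of dicts with 'ip' and 'device' keys (the existing ring).
    node_facts: mapping of node IP -> {disk name: size in bytes}, shaped
    like the output of an Ansible fact-gathering step.

    Existing devices are left untouched, so manually tuned weights,
    regions, or zones survive subsequent runs.
    """
    existing = {(d["ip"], d["device"]) for d in ring_devices}
    added = []
    for ip, disks in node_facts.items():
        for name, size in disks.items():
            if name == "sda":            # skip the root disk (assumed convention)
                continue
            if (ip, name) in existing:   # idempotent: never touch known devices
                continue
            dev = {"ip": ip, "device": name,
                   "weight": round(size / 1024 ** 3)}  # weight ~ size in GiB
            ring_devices.append(dev)
            added.append(dev)
    return added


ring = [{"ip": "172.16.0.10", "device": "sdb", "weight": 100}]
facts = {"172.16.0.10": {"sda": 500 * 1024 ** 3,
                         "sdb": 4 * 1024 ** 4,
                         "sdc": 4 * 1024 ** 4}}
update_ring(ring, facts)   # adds only sdc; sdb keeps its manual weight of 100
```

A second run with the same facts adds nothing, which is exactly the behavior described above for subsequent playbook runs.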

Work items
1. Add an option to change region/zone for an existing ring (without deleting and re-adding disks, which is required today)
2. Implement the Ansible playbook and include this into the deployment workflow
3. Disable ring management by puppet-swift
Comment 14 Christian Schwede (cschwede) 2018-03-08 05:11:15 EST
We should change the goal for this one slightly and rework how the disk layout is built in the existing patch.

To make deployment options within TripleO consistent, we should use the same (or very similar) config settings to define disks for Swift as we already do for Ceph. 

We should be able to use config settings similar to Ceph, for example like this:

  OS::TripleO::SwiftStorageExtraConfigPre: ./overcloud/puppet/extraconfig/pre_deploy/per_node.yaml
  NodeDataLookup: >
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "SwiftAnsibleDisksConfig": {
          "object_devices": [],
          "devices": [],
          "region": 2,
          "zone": 5
        }
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "SwiftAnsibleDisksConfig": {
          "object_devices": [],
          "devices": []
        }
      }

In the end this will result in an improved deployment, consistent operator settings, and a cleaner patch that does not use the Ansible inventory to gather disks (which is not the OOO way).
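
To illustrate how per-node settings keyed by system UUID might be consumed, here is a small sketch in Python. The lookup structure mirrors the example above; the default values and the merge behavior are assumptions for illustration only:

```python
import json

# Per-node data keyed by system UUID, mirroring the NodeDataLookup example.
NODE_DATA = json.loads("""{
  "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
    "SwiftAnsibleDisksConfig": {"devices": ["sdb", "sdc"], "region": 2, "zone": 5}
  }
}""")

# Hypothetical defaults for nodes without an explicit entry.
DEFAULTS = {"devices": [], "object_devices": [], "region": 1, "zone": 1}


def disks_config(uuid: str) -> dict:
    """Return the node's SwiftAnsibleDisksConfig merged over the defaults,
    so nodes without an entry fall back to region 1 / zone 1."""
    node = NODE_DATA.get(uuid, {})
    return {**DEFAULTS, **node.get("SwiftAnsibleDisksConfig", {})}


cfg = disks_config("4C4C4544-0047-3610-8031-C8C04F4A4B32")
print(cfg["region"], cfg["zone"])              # 2 5
print(disks_config("unknown-uuid")["region"])  # 1
```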
