Description of problem:
Director-managed Swift clusters might break if nodes are replaced or new nodes are added.

Version-Release number of selected component (if applicable):
Probably all.

How reproducible:
Most likely always when new nodes are added to an existing cluster.

Steps to Reproduce:
1. Deploy a Swift cluster with multiple nodes using director.
2. Remove one (or more) Swift storage nodes.
3. Redeploy to rebalance the rings.
4. Add new nodes to the existing cluster and deploy them.

Actual results:
Differing Swift rings (/etc/swift/[account|container|object].[ring.gz|builder]) on the new nodes.

Expected results:
Rings (/etc/swift/[account|container|object].[ring.gz|builder]) are identical on all nodes.

Additional info:
Note: this concern is based on reading the code in puppet-swift and tripleo-heat-templates; due to a lack of (hardware) resources I was not able to verify it. If it applies (and I'm quite sure that it does), it will have a severe impact on Swift clusters when the cluster topology changes.

The rings in Swift define where data is stored within a cluster. They are used both when storing and retrieving data, and also by background processes like the replicators. If the rings are not identical on all nodes, objects might not be found and/or the replicators might replicate data in an endless circle, overloading the cluster.

When a new node is added by TripleO, it is configured using puppet-swift. There are no existing ring files on new nodes, and therefore no "history". However, the ring-builder depends on the "history" of previous runs, and since the already existing nodes carry a different history, it is very likely that the balanced rings will differ.

AFAICT, puppet-swift provides ringserver and ringsync classes to avoid exactly this situation (every node uses the same ring source), but they are not used in director. A possible (somewhat dirty) workaround is to copy the existing ring files to new nodes before puppet runs.

I recommend checking whether the rings are identical on all nodes, especially after changing the cluster topology.
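A quick way to do that check (a minimal sketch only; the node hostnames and the heat-admin SSH user are examples, not something director generates) is to compare checksums of the ring files across all nodes:

  # Compare ring and builder file checksums on every Swift node (node names are examples)
  for node in overcloud-controller-0 overcloud-objectstorage-0 overcloud-objectstorage-1; do
    echo "== ${node} =="
    ssh heat-admin@${node} 'md5sum /etc/swift/*.ring.gz /etc/swift/*.builder'
  done

If the checksums differ between nodes, the rings have diverged.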
Possible workarounds for this:
1. Disable ring building on the nodes; please see the linked patch review.
2. Use a customized template and copy the .builder files from another node before puppet runs.
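For workaround 2, a rough sketch of the copy step, run from the undercloud (node names and the heat-admin user are assumptions; adjust to your deployment):

  # Fetch the rings and builders from an already deployed node ...
  mkdir -p /tmp/swift-rings
  scp heat-admin@overcloud-objectstorage-0:/etc/swift/\*.builder /tmp/swift-rings/
  scp heat-admin@overcloud-objectstorage-0:/etc/swift/\*.ring.gz /tmp/swift-rings/
  # ... and place them on the new node before its first puppet run
  scp /tmp/swift-rings/* heat-admin@overcloud-objectstorage-2:
  ssh heat-admin@overcloud-objectstorage-2 'sudo cp ~/*.builder ~/*.ring.gz /etc/swift/'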
@cschwede, do you know if this patch will be backported to OSP 7?
@Dan: No, I don't think this will be backported to OSP7. However, it was backported upstream to Liberty (https://review.openstack.org/#/c/295426/), and it is included in OSP8 (I just checked the latest puddle; it's included in openstack-tripleo-heat-templates/0.8.14-1.el7ost).
Today Giulio and I discussed the next steps to improve Swift support in director. The idea for solving the issue described in this BZ is to use the ringsync mechanism already provided by puppet-swift. The rings will be managed on one node, and the other nodes will fetch the .ring.gz files from that node.

https://github.com/openstack/puppet-swift/blob/master/manifests/ringsync.pp
https://github.com/openstack/puppet-swift/blob/master/manifests/ringserver.pp

It is important that this is done on a node with the oldest ring files (including the whole history), for example the first node that was deployed. The managing node will also need information about the IPs and devices of all nodes. A sketch of the fetch step is shown after the RFE list below.

There are a few more RFEs that will be worked on in the future. These are (ordered by priority):

https://bugzilla.redhat.com/show_bug.cgi?id=1276691 - multiple disks on Swift nodes. There is already a workaround: https://mojo.redhat.com/community/consulting-customer-training/services-innovation-and-incubation/technical-advanced-content/blog/2015/11/02/director-multiple-disks-for-swift-nodes

https://bugzilla.redhat.com/show_bug.cgi?id=1303093 - Add ability to disable Swift from overcloud deployment

https://bugzilla.redhat.com/show_bug.cgi?id=1303093 - Permit usage of unmanaged Swift clusters

The ideas in the last two RFEs were used for a customer recently.

https://bugzilla.redhat.com/show_bug.cgi?id=1320185 - Allow for customization of the Swift nodes' disk topology. This would make it possible to deploy a cluster with a more customized setup without manually managing the Swift rings. For example:
- different number of disks per node
- SSDs for accounts/containers
- different regions and zones based on the datacenter layout
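To illustrate the ringsync idea: each non-managing node ends up pulling the balanced rings from the managing node over rsync, roughly equivalent to the following (a sketch only; the IP address and the rsync module name are assumptions based on the puppet-swift manifests linked above, not verified against a deployment):

  # Pull the .ring.gz files from the ring-managing node (IP and module name are assumed)
  for ring in account container object; do
    rsync -av rsync://192.168.0.10/swift_server/${ring}.ring.gz /etc/swift/
  done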
There is a wrong BZ reference above (thanks, Thiago!). The correct one is: https://bugzilla.redhat.com/show_bug.cgi?id=1320209 - Permit usage of unmanaged Swift clusters
This should probably be fixed by the patch from comment #5 plus documentation. Future work should go into separate bugzillas. Correct?
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
It seems that the workaround from upstream (disabling Ring management) is included in openstack-tripleo-heat-templates-0.8.14-7.el7ost.noarch.rpm (from the GA release puddle)? This doesn't fix the bug itself, but at least there is a known workaround for it.
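For reference, checking which build is installed on the undercloud is enough to tell whether that workaround is available (a trivial sketch):

  # The ring management workaround should be in 0.8.14-7.el7ost or newer
  rpm -q openstack-tripleo-heat-templates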
I see we have the osp7 fix for this ON_QA (bug 1321088) however I don't see osp8 or osp9 clones. Are they needed?
Never mind, I see openstack-tripleo-heat-templates-0.8.14-9.el7ost is available in the channel, and according to comment 12 it includes the needed changes.
Added a link to an upstream patch that actually fixes this issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:1245