Bug 1311597
Summary: | Nonoptimal failover strategy can lead to RPC timeout | ||
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Marian Krcmarik <mkrcmari> |
Component: | python-oslo-messaging | Assignee: | Victor Stinner <vstinner> |
Status: | CLOSED ERRATA | QA Contact: | Udi Shkalim <ushkalim> |
Severity: | urgent | Docs Contact: | |
Priority: | high | ||
Version: | 7.0 (Kilo) | CC: | apevec, dnavale, fpercoco, jschluet, lhh, michele, mkrcmari, oblaut, pablo.iranzo, rcernin, srevivo, ushkalim, vstinner |
Target Milestone: | async | Keywords: | AutomationBlocker, ZStream |
Target Release: | 7.0 (Kilo) | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | python-oslo-messaging-1.8.3-6.el7ost | Doc Type: | Bug Fix |
Doc Text: |
Oslo Messaging used the 'shuffle' strategy to select a RabbitMQ host from the list of RabbitMQ servers. When a node of the cluster running RabbitMQ was restarted, each OpenStack service connected to that server had to reconnect to a new RabbitMQ server. Unfortunately, the 'shuffle' strategy does not handle dead RabbitMQ servers well: it can try to connect to the same dead server several times in a row. This increases reconnection time and can cause RPC operations to time out, because the strategy gives no guarantee on how long reconnection will take.
With this update, Oslo Messaging uses the 'round-robin' strategy to select a RabbitMQ host. This strategy minimizes reconnection time and avoids RPC timeouts when a node is restarted. It also guarantees that if K of the N RabbitMQ hosts are alive, at most N - K + 1 attempts are needed to reconnect to the RabbitMQ cluster. (A short sketch contrasting the two strategies follows the metadata table below.)
|
Story Points: | --- |
Clone Of: | 1302391 | Environment: | |
Last Closed: | 2017-01-19 13:27:11 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1302391 | ||
Bug Blocks: |
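
To make the Doc Text above concrete, here is a minimal Python sketch of the two host-selection strategies. It is illustrative only, not the actual oslo.messaging code; the host names and helper functions are made up for the example.

```python
import itertools
import random

# Illustrative sketch only -- not the oslo.messaging implementation.
HOSTS = ["rabbit-0:5672", "rabbit-1:5672", "rabbit-2:5672"]

def shuffle_candidates(hosts):
    """'shuffle': pick a random host on every attempt. The same dead host
    can be chosen several times in a row, so there is no upper bound on
    how many attempts a reconnection needs."""
    while True:
        yield random.choice(hosts)

def round_robin_candidates(hosts):
    """'round-robin': cycle through the hosts in order. If K of the N hosts
    are alive, a live host is reached within at most N - K + 1 attempts."""
    return itertools.cycle(hosts)

def reconnect(candidates, is_alive, max_attempts=10):
    """Walk the candidate stream until a host accepts the connection."""
    for attempt, host in enumerate(itertools.islice(candidates, max_attempts), 1):
        if is_alive(host):
            return host, attempt
    raise RuntimeError("no RabbitMQ host reachable")

# Example: rabbit-0 was just restarted, the other two nodes are up.
alive = lambda host: not host.startswith("rabbit-0")
print(reconnect(round_robin_candidates(HOSTS), alive))  # ('rabbit-1:5672', 2)
print(reconnect(shuffle_candidates(HOSTS), alive))      # attempt count varies per run
```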
Comment 1
Flavio Percoco
2016-02-24 20:35:53 UTC
(In reply to Flavio Percoco from comment #1)
> This patch doesn't apply cleanly and it seems to conflict with a previous
> backport. How much of this is really needed for OSP7? And how far down in
> the releases are we expecting to go?
>
> I'm also not super happy with this backport because it adds a new config
> option, which is not something we normally do on backports.

RHOS7 and RHOS8 behave the same in this regard. The impact is as follows: in some situations when one of the controllers goes down (especially controller-0, the first one in the config), RabbitMQ clients that were previously connected to that node sometimes take a long time to reconnect, because they keep trying to connect to the dead RabbitMQ server. It does not happen every time, and it usually takes up to several minutes (the longest I have personally seen is almost 5 minutes). I would leave the decision to the PM (not sure who the right person is); honestly, I am not sure myself how far down we are supposed to go.

Verified on python-oslo-messaging-1.8.3-6.el7ost.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0158.html
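
For reference, the "new config option" mentioned in the backport discussion above is, as far as I can tell, the failover-strategy knob from upstream oslo.messaging. The sketch below shows what registering and reading such an option with oslo.config looks like; the option name 'kombu_failover_strategy' and the 'oslo_messaging_rabbit' group are assumptions based on the upstream option, not something confirmed by this bug report.

```python
from oslo_config import cfg

# Assumption: the backported option mirrors the upstream oslo.messaging
# 'kombu_failover_strategy' option in the [oslo_messaging_rabbit] group.
opts = [
    cfg.StrOpt('kombu_failover_strategy',
               choices=('round-robin', 'shuffle'),
               default='round-robin',
               help='How the next RabbitMQ node is chosen when the current '
                    'connection is lost.'),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts, group='oslo_messaging_rabbit')
print(conf.oslo_messaging_rabbit.kombu_failover_strategy)  # -> round-robin
```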