| Summary: | Redis resource-agent should notify clients of master being demoted | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Marcel Kolaja <mkolaja> |
| Component: | resource-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | Asaf Hirshberg <ahirshbe> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.4 | CC: | agk, apevec, cfeist, cluster-maint, dbecker, ebarrera, fdinitto, jcoufal, jherrman, k-akatsuka, lhh, mabaakou, mburns, mjuricek, mkrcmari, mnovacek, morazi, oalbrigt, oblaut, rhel-osp-director-maint, royoung, yprokule |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | resource-agents-3.9.5-82.el7_3.6 | Doc Type: | Bug Fix |
| Doc Text: | Prior to this update, the redis resource agent did not notify clients when a master node was demoted to slave mode. In some cases, this caused temporary unavailability of data on Red Hat OpenStack Platform. This update fixes the underlying code, and the described problem no longer occurs. | Story Points: | --- |
| Clone Of: | 1305549 | Environment: | |
| Last Closed: | 2017-03-02 17:12:11 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | 1305549, 1414967 | | |
| Bug Blocks: | | | |
Description Marcel Kolaja 2016-11-30 13:17:28 UTC
I performed some heavy testing for this change, and a couple of issues came up that prevented getting the expected behaviour:

- The patch is not complete: when the client connections are killed at an early stage of node demotion, clients try to reconnect, but no master has been elected yet. The Redis recommendation is to PAUSE clients first, to make sure the slaves have processed the latest replication stream from the master, and only then let the clients reconnect. Mehdi created a patch which I tested and which helped to get better results. A minimal sketch of that pause-then-kill sequence follows this list.

- The reason why, even with Mehdi's patch applied, I could not observe the expected behaviour was that the redis node demotion, the promotion of another node to master, and the client kill with the subsequent reconnection all happened before haproxy gave up on the "dead" (demoted) redis node. Everything started to work as expected when I lowered the health check to 3 tries with a 1s interval instead of the current 5 checks with a 2s interval.
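For illustration only, a minimal sketch of the pause-then-kill sequence described above; the 3000 ms pause window and the `CLIENT KILL TYPE normal` filter are assumed values here, not what the resource agent actually runs:

```
# Pause all clients so the slaves can process the latest replication
# stream from the (soon to be demoted) master. The 3000 ms window is
# an assumed value; it should cover the expected failover time.
redis-cli CLIENT PAUSE 3000

# Drop the existing normal client connections; when the pause window
# expires, the clients reconnect, ideally landing on the new master.
redis-cli CLIENT KILL TYPE normal
```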
> - The reason why, even with Mehdi's patch applied, I could not observe
> the expected behaviour was that the redis node demotion, the promotion
> of another node to master, and the client kill with the subsequent
> reconnection all happened before haproxy gave up on the "dead"
> (demoted) redis node. Everything started to work as expected when I
> lowered the health check to 3 tries with a 1s interval instead of the
> current 5 checks with a 2s interval.
Forgot to add: that means that the redis clients keep connecting to the slave node and not to the new master, since haproxy keeps redirecting them to the demoted node. A sketch of the corresponding haproxy check tuning is below.
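For reference, roughly what that health-check tuning looks like in a haproxy backend; the backend name, server names, and addresses are hypothetical, and the tcp-check probe is only an assumed example of a role-aware redis check:

```
# Sketch of the tightened redis health check (hypothetical names/addresses).
# Before: "inter 2s fall 5" - up to ~10s before a demoted node is marked down.
# After:  "inter 1s fall 3" - the node is marked down within ~3s.
backend redis
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    server controller-0 172.16.0.10:6379 check inter 1s fall 3 rise 2
    server controller-1 172.16.0.11:6379 check inter 1s fall 3 rise 2
    server controller-2 172.16.0.12:6379 check inter 1s fall 3 rise 2
```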
Tested patch: https://github.com/ClusterLabs/resource-agents/pull/890

Verified using resource-agents-3.9.5-82.el7_3.5.x86_64
overcloud0:
1) pcs status:
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-0 ]
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
2) pcs resource move redis-master overcloud-controller-2
3) less /var/log/ceilometer/central.log
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination [-] Error sending a heartbeat to coordination backend.
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination Traceback (most recent call last):
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 130, in heartbeat
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination self._coordinator.heartbeat()
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 503, in heartbeat
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination value=self.STILL_ALIVE)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination self.gen.throw(type, value, traceback)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 54, in _translate_failures
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination cause=e)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 763, in raise_with_cause
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination excutils.raise_with_cause(exc_cls, message, *args, **kwargs)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 143, in raise_with_cause
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination six.raise_from(exc_cls(message, *args, **kwargs), kwargs.get('cause'))
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination File "/usr/lib/python2.7/site-packages/six.py", line 692, in raise_from
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination raise value
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination ToozError: You can't write against a read only slave.
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0382.html