This bug has been copied from bug #1305549 and has been proposed for backport to the 7.3 z-stream (EUS).
I performed some heavy testing of this change and a couple of issues came up that prevented the expected behaviour:

- The patch is not complete: when client connections are killed at an early stage of node demotion, clients try to reconnect while no new master has been elected yet. The Redis recommendation is to PAUSE clients first, so that the slaves can process the latest replication stream from the master, and only then let clients reconnect (see the sketch after this list). Mehdi created a patch which I tested and which helped to get better results.

- The reason why, even with Mehdi's patch applied, I could not observe the expected behaviour was that the Redis node demotion, the promotion of another node to master, and the client kill with the subsequent reconnection all happened before haproxy gave up on the "dead" (demoted) Redis node. Everything started to work as expected once I lowered the health check to 3 tries with a 1s interval instead of the current 5 checks with a 2s interval.
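For reference, a minimal sketch of the demote-side ordering described in the first point, using stock redis-cli commands; this assumes a Redis version that supports CLIENT PAUSE (3.0+), and the port and pause timeout are illustrative values only. The actual change in Mehdi's patch may differ.

# Assumed sketch: pause clients so the slaves can catch up on the replication
# stream, then drop the existing client connections so they reconnect to the
# newly promoted master (port 6379 and the 5000 ms pause are illustrative).
redis-cli -p 6379 CLIENT PAUSE 5000        # suspend normal clients for 5000 ms
redis-cli -p 6379 CLIENT KILL TYPE normal  # close existing client connections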
> - The reason why, even with Mehdi's patch applied, I could not observe the
> expected behaviour was that the Redis node demotion, the promotion of
> another node to master, and the client kill with the subsequent
> reconnection all happened before haproxy gave up on the "dead" (demoted)
> Redis node. Everything started to work as expected once I lowered the
> health check to 3 tries with a 1s interval instead of the current 5 checks
> with a 2s interval.

Forgot to add: that means Redis clients keep connecting to the slave node and not to the new master, since haproxy keeps redirecting them to the demoted node.
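As a rough illustration of the health-check tuning mentioned above, a haproxy backend for Redis could look like the sketch below. The backend name, addresses and the tcp-check role probe are assumptions (the deployed haproxy.cfg may differ, and an AUTH step would be needed if requirepass is set); the relevant part is fall 3 / inter 1s replacing the previous 5 checks at a 2s interval.

backend redis
    # Only the node reporting role:master should receive traffic
    # (assumed tcp-check probe).
    option tcp-check
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    # Give up on a demoted node faster: 3 failed checks at a 1s interval
    # instead of 5 checks at a 2s interval.
    server overcloud-controller-0 172.16.0.10:6379 check inter 1s fall 3 rise 2
    server overcloud-controller-1 172.16.0.11:6379 check inter 1s fall 3 rise 2
    server overcloud-controller-2 172.16.0.12:6379 check inter 1s fall 3 rise 2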
Tested patch: https://github.com/ClusterLabs/resource-agents/pull/890
Verified using resource-agents-3.9.5-82.el7_3.5.x86_64 on overcloud0:

1) pcs status:

 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-1 overcloud-controller-2 ]

2) pcs resource move redis-master overcloud-controller-2

3) less /var/log/ceilometer/central.log

2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination [-] Error sending a heartbeat to coordination backend.
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination Traceback (most recent call last):
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 130, in heartbeat
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     self._coordinator.heartbeat()
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 503, in heartbeat
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     value=self.STILL_ALIVE)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     self.gen.throw(type, value, traceback)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 54, in _translate_failures
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     cause=e)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 763, in raise_with_cause
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     excutils.raise_with_cause(exc_cls, message, *args, **kwargs)
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 143, in raise_with_cause
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     six.raise_from(exc_cls(message, *args, **kwargs), kwargs.get('cause'))
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination   File "/usr/lib/python2.7/site-packages/six.py", line 692, in raise_from
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination     raise value
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination ToozError: You can't write against a read only slave.
2017-01-24 14:19:40.738 101752 ERROR ceilometer.coordination
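A possible follow-up check after the resource move, to confirm that clients end up on the newly promoted master, is to query the role through the haproxy frontend; the VIP address below is an assumption, and authentication is omitted for brevity.

# Hypothetical check: the role reported through the haproxy frontend should
# flip to the newly promoted node shortly after the resource move.
redis-cli -h 172.16.0.100 -p 6379 info replication | grep ^role
# expected once failover completes:
# role:master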
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0382.html