This bug has been copied from bug #1499217 and has been proposed to be backported to 7.4 z-stream (EUS).
Per discussion on related Bug 1499217:

> As noted in https://bugzilla.redhat.com/show_bug.cgi?id=1505909, comment #7,
> I tested a scratch build with the provided patch and I can now clean errors
> by doing "pcs resource cleanup galera-bundle". I can also reprobe the state
> of an unmanaged resource.
>
> However, I now face another issue, in that when I "pcs resource manage
> galera-bundle" after the cleanup, a restart operation is triggered, which is
> unexpected and breaks the idiomatic way of "reprobing the current state of a
> resource before giving back control to pacemaker".

Based on the described actions:

pcs resource unmanage galera
pcs resource update galera cluster_host_map='ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'
pcs resource cleanup galera

A restart after galera becomes managed again is expected, due to the resource definition having changed. I would expect that unmanage + cleanup + manage would not trigger a restart.
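As an aside, one way to preview what pacemaker would schedule before giving control back is a live simulation. A minimal sketch using crm_simulate (illustrative only, not part of the original report; output formatting varies between pacemaker versions):

# Simulate the next transition against the live CIB:
crm_simulate -SL
# The "Transition Summary" lists the actions pacemaker would run next; after
# an update to the resource definition, a restart of galera shows up here.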
(In reply to Ken Gaillot from comment #2)
> Based on the described actions:
>
> pcs resource unmanage galera
> pcs resource update galera cluster_host_map='ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'
> pcs resource cleanup galera
>
> A restart after galera becomes managed again is expected, due to the
> resource definition having changed. I would expect that unmanage + cleanup +
> manage would not trigger a restart.

My mistake, the cleanup after the update should prevent the restart. In addition to the commits listed in Bug 1499217 Comment 8, we also needed a small part of upstream commit e3b825a.
I've just tested the scratch build and confirm that all the cleanup tests are working. I also confirm that I no longer see any spurious restart action once I "pcs resource cleanup" an unmanaged resource and then "pcs resource manage" it. Thanks!
Instructions for verifying the fix:

Let ra1, ra2, and ra3 be the names of your controller nodes. The tests consist of making sure that the cleanup:

. correctly reprobes the state of resources (even when unmanaged),
. doesn't cause any stop or restart action when unnecessary.

#1. Ensure that the cleanup works, as mentioned by Ken in comment #2:

pcs resource unmanage galera
pcs resource update galera cluster_host_map='ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'
pcs resource cleanup galera

The state of the galera resource should read Master. (Previously it was failing to report its state and was kept in Slave.)

#2. Give back control to pacemaker, and ensure no restart is triggered:

pcs resource manage galera

No galera replica should be stopped or restarted. Check in the logs that no such operation is scheduled by pacemaker (see the verification sketch after these steps).

#3. Ensure that the cleanup works when one reprobes the state of the bundle:

pcs resource unmanage galera-bundle
pcs resource cleanup galera-bundle

This will unmanage the resource _and_ the container and pacemaker-remote that manage it. A cleanup should successfully reprobe the state of the galera resource as in test #1.

#4. Give back control to pacemaker and ensure that neither the galera server nor the galera docker container is restarted:

pcs resource manage galera-bundle

As in test #2, no restart should happen. The pid of the galera server should stay unchanged, and "docker ps" should show that the galera docker container is still up.
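A minimal sketch of these checks, assuming the galera server process is mysqld and the bundle containers carry a "galera-bundle" name prefix (both are assumptions; adjust to your deployment):

# Tests #1/#3: after cleanup, the resource should report Master again
crm_mon -1 | grep -i galera   # expect "Master", not "Slave"

# Tests #2/#4: record state before giving control back to pacemaker
docker ps --filter name=galera-bundle   # note the container ID and CREATED time
pid_before=$(pgrep -f mysqld | head -1)

pcs resource manage galera-bundle

# Re-check shortly afterwards; the container ID and pid should be unchanged
sleep 30
docker ps --filter name=galera-bundle
pid_after=$(pgrep -f mysqld | head -1)
[ "$pid_before" = "$pid_after" ] && echo "no restart" || echo "galera was restarted"

# Confirm pacemaker scheduled no stop/restart for galera (log path may vary)
grep -iE 'galera.*(stop|restart)' /var/log/messages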
Verified on: pacemaker-1.1.16-12.el7_4.5.x86_64

Followed steps in comment #6 and did not notice any restart of galera containers or any of the issues mentioned above.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3328