Created attachment 1221011 [details]
pcs cluster cib

Description of problem:
The cluster has two nodes. Ten LSB resources, named "resource_ccg_gid1" through "resource_ccg_gid10", were started on one cluster node. The resource agents are shell scripts which start our own application program. When the node was taken down on purpose, simulating a power failure, all resources except "resource_ccg_gid1" migrated and ran correctly on the peer node.

In the attached log file, "resource_ccg_gid1" was started on the peer node around Nov 15 16:48:30. We saw in the log file of our application program that about 13 seconds after it started, it received a "stop" command from the cluster. In all the resource creations, we set "op start timeout=45s". "pcs status resources" showed the resource as FAILED (verbatim) for a short interval, and then as "Stopped".

The file "corosync.log" from the node where the resource failed is attached, as well as the output of "pcs cluster cib".

If more than ten resources are started and then migrated in a similar fashion, the number of resources which fail to start on the peer node is higher.

Version-Release number of selected component (if applicable):
1.1.13-10.el7_2.4.x86_64
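For reference, each resource was created roughly like this (reconstructed from memory -- the init script name "ccg_gid1" below is approximate, the exact values are in the attached cib):

  # one of the ten resources; the others differ only in the gid number
  pcs resource create resource_ccg_gid1 lsb:ccg_gid1 \
      op start timeout=45s op monitor interval=10s \
      meta migration-threshold=1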
Created attachment 1221013 [details]
corosync.log
Created attachment 1221028 [details]
corosync.log zipped

We realized that we could zip the log file :-( and reduce it to a much smaller size. The file uploaded to Google Drive will be removed because it is no longer needed.
I believe this is simply a matter of the resource agent design, made visible by the load created when everything is failing over at once.

It is important that any OCF or LSB resource agent not return success from start until the service is fully operational, which in pacemaker terms means that it would pass a monitor call. I suspect that your agent is returning from the start once the relevant process has been started, but it has some start-up time before it is ready to handle requests.

The logs show:

Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: info: match_graph_event: Action resource_ccg_gid1_stop_0 (18) confirmed on uis1ccg2-app (rc=0)
-> The resource stops cleanly on old node

Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: notice: process_lrm_event: Operation resource_ccg_gid1_start_0: ok (node=uis1ccg1-app, call=1653, rc=0, cib-update=1890, confirmed=true)
-> and starts cleanly on new node

Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: info: process_lrm_event: Operation resource_ccg_gid1_monitor_10000: not running (node=uis1ccg1-app, call=1654, rc=7, cib-update=1891, confirmed=false)
-> but monitor reports that it is not running

Because the resource has migration-threshold=1, pacemaker will not try to restart the resource on the same node, and because the other node is down, there's nothing it can do but leave the resource stopped.

FYI, this appears to be a "soft poweroff" of the other node, which allows the OS to catch the signal and cleanly shut down. That is a good scenario to test, but it does not simulate a power failure. For that, you need to test a "hard poweroff" that instantly destroys the VM.
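As a rough sketch of what I mean (the daemon path and the readiness test below are placeholders, not your actual agent), the start action should only return once a status call would already pass:

  start() {
      /opt/ccg/bin/ccg_gid1 --daemon          # placeholder: launch the service

      # Do not report success until a status/monitor call would also succeed.
      # The whole wait must fit inside the configured "op start timeout" (45s here).
      for i in $(seq 1 40); do
          if /opt/ccg/bin/ccg_gid1 --status >/dev/null 2>&1; then
              return 0                        # fully operational -> LSB success
          fi
          sleep 1
      done

      return 1                                # never became ready -> start fails
  }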
(In reply to Ken Gaillot from comment #4)
> I believe this is simply a matter of the resource agent design, made visible
> by the load created when everything is failing over at once.
>
> It is important that any OCF or LSB resource agent not return success from
> start until the service is fully operational, which in pacemaker terms means
> that it would pass a monitor call. I suspect that your agent is returning
> from the start once the relevant process has been started, but it has some
> start-up time before it is ready to handle requests.
>
> The logs show:
>
> Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: info:
> match_graph_event: Action resource_ccg_gid1_stop_0 (18) confirmed on
> uis1ccg2-app (rc=0)
> -> The resource stops cleanly on old node
>
> Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: notice:
> process_lrm_event: Operation resource_ccg_gid1_start_0: ok
> (node=uis1ccg1-app, call=1653, rc=0, cib-update=1890, confirmed=true)
> -> and starts cleanly on new node
>
> Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: info:
> process_lrm_event: Operation resource_ccg_gid1_monitor_10000: not
> running (node=uis1ccg1-app, call=1654, rc=7, cib-update=1891,
> confirmed=false)
> -> but monitor reports that it is not running
>
> Because the resource has migration-threshold=1, pacemaker will not try to
> restart the resource on the same node, and because the other node is down,
> there's nothing it can do but leave the resource stopped.
>
> FYI, this appears to be a "soft poweroff" of the other node, which allows
> the OS to catch the signal and cleanly shut down. That is a good scenario to
> test, but it does not simulate a power failure. For that, you need to test a
> "hard poweroff" that instantly destroys the VM.

Thank you very much, Mr. Ken!

Yes, indeed, this is precisely what happens: "the agent is returning from the start once the relevant process has been started, but it has some start-up time before it is ready to handle requests".

However, I thought that this situation would be handled by the cluster if we set the value of "op start timeout" sufficiently high, which we did (timeout=45s). The documentation seems to indicate that this is the solution in case the LSB resource may take a longer time to respond to the "status" request:

"If you find that your system includes a resource that takes a long time to start or stop or perform a non-recurring monitor action at startup, and requires more time than the system allows before declaring that the start action has failed, you can increase this value from the default of 20 or the value of timeout in "op defaults"". (6.6 Resource Operation)

My interpretation of this text is that the cluster software should issue "monitor" commands to the resource for 45s, every "op monitor interval" seconds ("op monitor interval" was set to 10s in our case, I believe).

I don't think that the cluster software should check the resource just once, in the same second after it started it, and irrevocably declare that the resource is "not running" -- this seems to contradict section 6.6 of the manual. But maybe the cluster developers' expectation was that the resource agent would block until it is ready to give a definitive response to the "status" command. In that case, the manual is vague and incomplete in explaining what the resource agent is expected to do at startup.

Nonetheless, I will change our resource agent script not to respond to the "status" request before it is ready to respond with "result: 0", if you think that this may fix the issue.
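As an aside, if I read the manual correctly, the "op defaults" timeout it mentions can also be set cluster-wide instead of per operation; the syntax, if I have it right, would be something like:

  # cluster-wide default timeout for resource operations
  pcs resource op defaults timeout=45s

We chose to set timeout=45s directly on each resource's start operation instead.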
Maybe this will address https://bugzilla.redhat.com/show_bug.cgi?id=1395508 ???

We set migration-threshold=1 on purpose, because it makes no sense to run into the same problem over and over, as described above.

We may have done a "soft power off" -- run a "reboot" command at the shell prompt -- I don't really remember, but we ran subsequent tests where we turned the VM power off from vCenter; this is the most "brutal" technique available short of unplugging the power cable of the underlying physical machine :-).
(In reply to vlad.socaciu from comment #5)
> (In reply to Ken Gaillot from comment #4)
> > I believe this is simply a matter of the resource agent design, made visible
> > by the load created when everything is failing over at once.
> >
> > It is important that any OCF or LSB resource agent not return success from
> > start until the service is fully operational, which in pacemaker terms means
> > that it would pass a monitor call. I suspect that your agent is returning
> > from the start once the relevant process has been started, but it has some
> > start-up time before it is ready to handle requests.
> >
> > The logs show:
> >
> > Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: info:
> > match_graph_event: Action resource_ccg_gid1_stop_0 (18) confirmed on
> > uis1ccg2-app (rc=0)
> > -> The resource stops cleanly on old node
> >
> > Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: notice:
> > process_lrm_event: Operation resource_ccg_gid1_start_0: ok
> > (node=uis1ccg1-app, call=1653, rc=0, cib-update=1890, confirmed=true)
> > -> and starts cleanly on new node
> >
> > Nov 15 16:48:30 [1675] uis1ccg1.uismag.com crmd: info:
> > process_lrm_event: Operation resource_ccg_gid1_monitor_10000: not
> > running (node=uis1ccg1-app, call=1654, rc=7, cib-update=1891,
> > confirmed=false)
> > -> but monitor reports that it is not running
> >
> > Because the resource has migration-threshold=1, pacemaker will not try to
> > restart the resource on the same node, and because the other node is down,
> > there's nothing it can do but leave the resource stopped.
> >
> > FYI, this appears to be a "soft poweroff" of the other node, which allows
> > the OS to catch the signal and cleanly shut down. That is a good scenario to
> > test, but it does not simulate a power failure. For that, you need to test a
> > "hard poweroff" that instantly destroys the VM.
>
> Thank you very much, Mr. Ken!
>
> Yes, indeed, this is precisely what happens: "the agent is returning from
> the start once the relevant process has been started, but it has some
> start-up time before it is ready to handle requests".
>
> However, I thought that this situation would be handled by the cluster
> if we set the value of "op start timeout" sufficiently high, which we did
> (timeout=45s). The documentation seems to indicate that this is the solution
> in case the LSB resource may take a longer time to respond to the "status"
> request:
>
> "If you find that your system includes a resource that takes a long time to
> start or stop or perform a non-recurring monitor action at startup, and
> requires more time than the system allows before declaring that the start
> action has failed, you can increase this value from the default of 20 or the
> value of timeout in "op defaults"". (6.6 Resource Operation)

The part you're missing is that start MUST only return at the point the underlying daemon is fully functional.

If you want the status/monitor to be given longer, you need to set its timer higher, not the start operation's.
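If you did want to give the monitor operation itself more headroom, that would look something like the following (the 60s value is only an illustration, pick whatever suits your service) -- but the real fix is the start behaviour above:

  # give the recurring monitor itself more time to complete;
  # the start operation's timeout is left as it is
  pcs resource update resource_ccg_gid1 op monitor interval=10s timeout=60s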
(In reply to vlad.socaciu from comment #5)
> Yes, indeed, this is precisely what happens: "the agent is returning from
> the start once the relevant process has been started, but it has some
> start-up time before it is ready to handle requests".
>
> However, I thought that this situation would be handled by the cluster
> if we set the value of "op start timeout" sufficiently high, which we did
> (timeout=45s). The documentation seems to indicate that this is the solution
> in case the LSB resource may take a longer time to respond to the "status"
> request:
>
> "If you find that your system includes a resource that takes a long time to
> start or stop or perform a non-recurring monitor action at startup, and
> requires more time than the system allows before declaring that the start
> action has failed, you can increase this value from the default of 20 or the
> value of timeout in "op defaults"". (6.6 Resource Operation)
>
> My interpretation of this text is that the cluster software should issue
> "monitor" commands to the resource for 45s, every "op monitor interval"
> seconds ("op monitor interval" was set to 10s in our case, I believe).
>
> I don't think that the cluster software should check the resource just once
> in the same second after it started it, and irrevocably declare that the
> resource is "not running" -- this seems to contradict 6.6 from the manual.
> But maybe the cluster developers' expectation was that the resource agent
> would block until it is ready to give a definitive response to the "status"
> command. In this case, the manual is vague and incomplete in explaining what
> the resource agent is expected to do at startup.

I can see how the manual could be confusing, so I'll look into clarifying it.

Yes, the cluster expects that monitor should return OK immediately after start returns OK. The start timeout is not the time until the first monitor, but the time given for the start itself to complete and return success or failure. If the start does not complete in that time, its process is killed, and it is considered failed. If the start completes before the timeout, the remaining timeout is ignored, and a monitor may immediately be done.

> Nonetheless, I will change our resource agent script not to respond to the
> "status" request before it is ready to respond with "result: 0", if you
> think that this may fix the issue. Maybe this will address
> https://bugzilla.redhat.com/show_bug.cgi?id=1395508 ???

It's unlikely -- if I remember correctly, the monitors that failed in that case were not immediately after a start. I think that one is more likely due to CPU load.

> We set migration-threshold=1 on purpose, because it makes no sense to run
> into the same problem over and over, as described above.

Sure, the option is there so people can adapt it to their situation :)

> We may have done a "soft power off" -- run a "reboot" command at the shell
> prompt -- I don't really remember, but we ran subsequent tests where we
> turned the VM power off from vCenter; this is the most "brutal"
> technique available short of unplugging the power cable of
> the underlying physical machine :-).

Sounds good :)
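By the way, once the agent is changed, you can exercise the start plus an immediate monitor by hand on one node, without a full failover; something like this should do it (both commands act on the local node):

  # run the resource's start action in the foreground
  crm_resource --resource resource_ccg_gid1 --force-start
  # then immediately run a one-shot monitor; it should report the resource as running
  crm_resource --resource resource_ccg_gid1 --force-check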
Hi,

Were you able to modify the resource agent and test again?
(In reply to Ken Gaillot from comment #8)
> Hi,
>
> Were you able to modify the resource agent and test again?

We modified the agents in November, when most of the discussions took place. Since then, I think I've tried the scenario once or twice (taking a node down while the resources were running on that node), but everything worked as expected. So, this case could be closed.

Thank you very much again for the patience and good advice!
Great, thanks :)