Bug 2060670

Summary:          No resources in a cloned group instance can run if a single resource in the instance fails to start
Product:          Red Hat Enterprise Linux 8
Component:        pacemaker
Version:          8.5
Hardware:         All
OS:               Linux
Status:           CLOSED MIGRATED
Severity:         medium
Priority:         low
Reporter:         Reid Wahl <nwahl>
Assignee:         Ken Gaillot <kgaillot>
QA Contact:       cluster-qe <cluster-qe>
CC:               cluster-maint
Target Milestone: rc
Keywords:         MigratedToJIRA, Triaged
Flags:            pm-rhel: mirror+
Type:             Bug
Last Closed:      2023-09-22 19:18:57 UTC

Description Reid Wahl 2022-03-03 23:51:18 UTC
Description of problem:

Consider a cloned resource group "test_grp" with resources dummy1 and dummy2.
  - dummy1 is an ocf:heartbeat:Dummy resource that can start with no issue.
  - dummy2 is an ocf:heartbeat:Dummy_fail resource (modified Dummy resource agent that returns OCF_ERR_GENERIC on start).

When dummy2 fails to start on a node, dummy1 is also prohibited from starting on any node. Pacemaker considers the entire group test_grp to have failed to start on that node, instead of considering only dummy2 to have failed to start.

This does not happen with non-cloned resource groups: if dummy2 fails to start, dummy1 can still run. It seems wrong for cloned resource groups to behave differently.

-----

Minimal demo:

Resource configuration (no constraints):

Resources:
 Group: test_grp
  Resource: dummy1 (class=ocf provider=heartbeat type=Dummy)
  Resource: dummy2 (class=ocf provider=heartbeat type=Dummy_fail)


# # Non-cloned group behavior:
# pcs resource create dummy1 ocf:heartbeat:Dummy --group test_grp
# pcs resource create dummy2 ocf:heartbeat:Dummy_fail --group test_grp
# pcs status --full
...
Node List:
  * Online: [ node1 (1) node2 (2) ]

Full List of Resources:
  * xvm	(stonith:fence_xvm):	 Started node1
  * Resource Group: test_grp:
    * dummy1	(ocf:heartbeat:Dummy):	 Started node2
    * dummy2	(ocf:heartbeat:Dummy_fail):	 Stopped

Migration Summary:
  * Node: node1 (1):
    * dummy2: migration-threshold=1000000 fail-count=1000000 last-failure='Thu Mar  3 15:42:48 2022'
  * Node: node2 (2):
    * dummy2: migration-threshold=1000000 fail-count=1000000 last-failure='Thu Mar  3 15:42:48 2022'

Failed Resource Actions:
  * dummy2_start_0 on node1 'error' (1): call=57, status='complete', last-rc-change='Thu Mar  3 15:42:48 2022', queued=0ms, exec=11ms
  * dummy2_start_0 on node2 'error' (1): call=40, status='complete', last-rc-change='Thu Mar  3 15:42:48 2022', queued=0ms, exec=11ms


# # Cloned group behavior
# pcs resource clone test_grp && pcs resource cleanup
# pcs status --full
...
Node List:
  * Online: [ node1 (1) node2 (2) ]

Full List of Resources:
  * xvm	(stonith:fence_xvm):	 Started node1
  * Clone Set: test_grp-clone [test_grp]:
    * Resource Group: test_grp:0:
      * dummy1	(ocf:heartbeat:Dummy):	 Stopped
      * dummy2	(ocf:heartbeat:Dummy_fail):	 Stopped
    * Resource Group: test_grp:1:
      * dummy1	(ocf:heartbeat:Dummy):	 Stopped
      * dummy2	(ocf:heartbeat:Dummy_fail):	 Stopped

Migration Summary:
  * Node: node1 (1):
    * dummy2: migration-threshold=1000000 fail-count=1000000 last-failure='Thu Mar  3 15:44:34 2022'
  * Node: node2 (2):
    * dummy2: migration-threshold=1000000 fail-count=1000000 last-failure='Thu Mar  3 15:44:34 2022'

Failed Resource Actions:
  * dummy2_start_0 on node1 'error' (1): call=69, status='complete', last-rc-change='Thu Mar  3 15:44:34 2022', queued=0ms, exec=19ms
  * dummy2_start_0 on node2 'error' (1): call=56, status='complete', last-rc-change='Thu Mar  3 15:44:34 2022', queued=0ms, exec=16ms


The scheduler thinks the **entire clone instance** has reached its migration threshold, rather than only the resource (dummy2) that failed within the instance. So it doesn't allow dummy1 to run either.

Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (unpack_rsc_op_failure)     warning: Unexpected result (error) was recorded for start of dummy2:0 on node1 at Mar  3 15:44:34 2022 | rc=1 id=dummy2_last_failure_0
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (unpack_rsc_op_failure)     warning: Unexpected result (error) was recorded for start of dummy2:0 on node1 at Mar  3 15:44:34 2022 | rc=1 id=dummy2_last_0
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (unpack_rsc_op_failure)     warning: Unexpected result (error) was recorded for start of dummy2:1 on node2 at Mar  3 15:44:34 2022 | rc=1 id=dummy2_last_failure_0
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (unpack_rsc_op_failure)     warning: Unexpected result (error) was recorded for start of dummy2:1 on node2 at Mar  3 15:44:34 2022 | rc=1 id=dummy2_last_0
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pe_get_failcount)  info: dummy2:0 has failed INFINITY times on node1
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pcmk__threshold_reached)   warning: test_grp-clone cannot run on node1 due to reaching migration threshold (clean up resource to allow again)| failures=1000000 migration-threshold=1000000
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pe_get_failcount)  info: dummy2:1 has failed INFINITY times on node1
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pcmk__threshold_reached)   warning: test_grp-clone cannot run on node1 due to reaching migration threshold (clean up resource to allow again)| failures=1000000 migration-threshold=1000000
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pe_get_failcount)  info: dummy2:0 has failed INFINITY times on node2
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pcmk__threshold_reached)   warning: test_grp-clone cannot run on node2 due to reaching migration threshold (clean up resource to allow again)| failures=1000000 migration-threshold=1000000
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pe_get_failcount)  info: dummy2:1 has failed INFINITY times on node2
Mar 03 15:44:34 fastvm-rhel-8-0-23 pacemaker-schedulerd[851482] (pcmk__threshold_reached)   warning: test_grp-clone cannot run on node2 due to reaching migration threshold (clean up resource to allow again)| failures=1000000 migration-threshold=1000000
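
The failcounts the scheduler is acting on can be confirmed per resource and node with crm_failcount, and, as the pcmk__threshold_reached messages suggest, cleaning up the failed resource clears them so the clone can be scheduled again (until dummy2's next failed start). For example, with the node and resource names from the demo above:

# crm_failcount --query --resource dummy2 --node node1
# pcs resource cleanup dummy2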


As a side note, it also thinks that both instances of dummy2 have failed on both nodes, when in reality only one instance has failed on each node. However, this part might be correct since they're anonymous clones, albeit with a slightly misleading message.

-----

Version-Release number of selected component (if applicable):

Current upstream main and pacemaker-2.0.5-9.el8_4.1.

-----

How reproducible:

Always

-----

Steps to Reproduce:
1. Create a cloned resource group consisting of rsc1 (which will successfully start) and rsc2 (which will fail to start). My approach for rsc2 was to copy ocf:heartbeat:Dummy to ocf:heartbeat:Dummy_fail, and then modify Dummy_fail so that it returns $OCF_ERR_GENERIC for the start operation.
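
For illustration, a minimal sketch of that modification, assuming the stock resource-agents layout and that the agent's start handler is named dummy_start() as in the shipped Dummy agent:

# cp /usr/lib/ocf/resource.d/heartbeat/Dummy /usr/lib/ocf/resource.d/heartbeat/Dummy_fail

Then edit Dummy_fail so the start action always fails:

dummy_start() {
    # Unconditionally fail so Pacemaker records a start failure for this resource
    return $OCF_ERR_GENERIC
}

(The modified agent must be present on every cluster node.)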

-----

Actual results:

rsc1 starts successfully. Then after rsc2 fails to start, rsc1 stops and remains stopped.

-----

Expected results:

rsc1 is able to run after rsc2 fails to start, just as it does in a non-cloned resource group in this scenario.

Comment 1 Ken Gaillot 2022-03-04 17:37:07 UTC
FYI, ocf:pacemaker:Dummy has a fail_start_on parameter that would allow configuring the clone to fail on one node.
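
A sketch of that approach, assuming fail_start_on takes a node name as the comment implies (node1 as in the demo above):

# pcs resource create dummy2 ocf:pacemaker:Dummy fail_start_on=node1 --group test_grp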

Comment 3 RHEL Program Management 2023-09-22 19:16:29 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 4 RHEL Program Management 2023-09-22 19:18:57 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of two footprints next to it and will begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.