Bug 1382068

Summary: pcs waits forever while disabling redis with constraints defined
Product: Red Hat Enterprise Linux 8
Component: pacemaker
Version: 8.2
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: low
Priority: low
Reporter: Raoul Scarazzini <rscarazz>
Assignee: Ken Gaillot <kgaillot>
QA Contact: cluster-qe <cluster-qe>
CC: cluster-maint, fdinitto, mnovacek
Target Milestone: rc
Keywords: Reopened, Triaged
Type: Bug
Last Closed: 2021-06-30 07:30:55 UTC

Description Raoul Scarazzini 2016-10-05 16:15:57 UTC
Description of problem:

In OSP7, on an updated RHEL 7.2, if you try to disable redis using these commands:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource disable redis
[heat-admin@overcloud-controller-0 ~]$ sudo crm_resource -V --wait

you wait forever, seeing output like this:

   error: clone_update_actions_interleave:      Internal error: No action found for demote in redis:1 (then)
   error: clone_update_actions_interleave:      Internal error: No action found for demote in redis:2 (then)
   error: clone_update_actions_interleave:      Internal error: No action found for demote in redis:1 (then)
   error: clone_update_actions_interleave:      Internal error: No action found for demote in redis:2 (then)
   ...
   (the two messages above repeat indefinitely)

The resource has constraints defined that should make the disable process straightforward.
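
For context, the constraints involved are ordering rules of this general shape (a hypothetical example only; the real resource and clone IDs come from the deployed OSP7 CIB):

[heat-admin@overcloud-controller-0 ~]$ sudo pcs constraint order promote redis-master then start openstack-heat-api-cfn-clone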

A similar problem related to redis happens with this test:

1) Stop galera
2) Stop rabbit
3) Stop redis
4) Start galera
5) Start rabbit
6) Start redis

If you use --wait on each operation, everything is fine until you try to start rabbit (step 5); then you wait forever (or until the timeout you passed to pcs resource enable rabbitmq --wait=<TIMEOUT> expires).

If the sequence is instead this one:

1) Stop galera
2) Stop rabbit
3) Stop redis
4) Start galera
5) Start redis
6) Start rabbit

Then everything ends fine.
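
For reference, the failing sequence with an explicit wait on each step looks like this (assuming the resource IDs galera, rabbitmq, and redis used elsewhere in this report; the last command is the one that hangs):

[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource disable galera --wait
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource disable rabbitmq --wait
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource disable redis --wait
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource enable galera --wait
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource enable rabbitmq --wait

The last command never returns; swapping it with "sudo pcs resource enable redis --wait" (the second sequence) completes normally.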

Note that the version of Pacemaker used was the one that fixed these two bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1347806
https://bugzilla.redhat.com/show_bug.cgi?id=1349493

Version-Release number of selected component (if applicable):

pacemaker-1.1.13-10.el7_2.4

How reproducible:

Always

Comment 7 Andrew Beekhof 2016-11-08 04:21:57 UTC
If you apply the following patch:

diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
index 42e8b07..bac9907 100644
--- a/tools/crm_resource_runtime.c
+++ b/tools/crm_resource_runtime.c
@@ -1360,7 +1360,9 @@ actions_are_pending(GListPtr actions)
     GListPtr action;
 
     for (action = actions; action != NULL; action = action->next) {
-        if (action_is_pending((action_t *) action->data)) {
+        action_t *a = (action_t *)action->data;
+        if (action_is_pending(a)) {
+            printf("Found: %s 0x%x\n", a->uuid, a->flags);
             return TRUE;
         }
     }

and run:

CIB_file=sosreport-overcloud-controller-0.localdomain-20161007091508/sos_commands/cluster/crm_report/overcloud-controller-0.localdomain/pe-input-45.bz2 tools/crm_resource --wait  -VV

using the build from the http://file.rdu.redhat.com/~rscarazz/BZ1382068/pacemaker-1.1.15-11.el7/ tarball, you'll see:

Found: openstack-heat-api-cfn:2_monitor_60000 0x112
Found: openstack-heat-api-cfn:2_monitor_60000 0x112
...


This indicates that although the openstack-heat-api-cfn:2_start_0 operation is correctly removed from the graph (because its dependencies depend on redis, which is not allowed to start), we are still under the impression that the monitor op can run.

This confuses the --wait logic.
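
To illustrate the failure mode, here is a minimal, self-contained sketch of the pending-action test that --wait relies on. The flag names mirror pacemaker's pe_action_flags enum, but the bit values and the exact checks are illustrative assumptions for this sketch, not the shipped implementation:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative flag bits; names mirror pacemaker's pe_action_flags,
 * but the values here are assumptions made for this sketch. */
enum {
    action_pseudo   = 1 << 0,  /* graph-internal marker, not a real op */
    action_runnable = 1 << 1,  /* the scheduler believes it can run */
    action_optional = 1 << 2,  /* not required by this transition */
};

typedef struct {
    const char *uuid;
    unsigned int flags;
} action_t;

/* Simplified stand-in for action_is_pending() in crm_resource_runtime.c:
 * an action counts as pending when it is required, runnable, and real. */
static bool action_is_pending(const action_t *a)
{
    if (a->flags & action_optional) {
        return false;          /* not required by this transition */
    }
    if (!(a->flags & action_runnable)) {
        return false;          /* cannot run, so nothing to wait for */
    }
    if (a->flags & action_pseudo) {
        return false;          /* pseudo-actions need no waiting */
    }
    return true;               /* --wait keeps polling */
}

int main(void)
{
    /* The orphaned recurring monitor from the output above: its start
     * was removed from the graph, yet it is still flagged runnable,
     * so every polling pass concludes that work remains. */
    action_t monitor = { "openstack-heat-api-cfn:2_monitor_60000",
                         action_runnable };

    if (action_is_pending(&monitor)) {
        printf("Found: %s 0x%x\n", monitor.uuid, monitor.flags);
    }
    return 0;
}

Because nothing here cross-checks the recurring monitor against its removed start action, the monitor stays "pending" on every polling pass and crm_resource --wait never terminates.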

Comment 8 Ken Gaillot 2017-01-10 22:21:12 UTC
This is unlikely to be addressed in the 7.4 timeframe

Comment 10 Ken Gaillot 2017-10-09 17:28:31 UTC
Due to time constraints, this will not make 7.5

Comment 11 Ken Gaillot 2020-06-03 21:01:01 UTC
Moving to RHEL 8 since this will not be fixed in the 7.9 time frame

Comment 14 Ken Gaillot 2020-10-13 22:02:18 UTC
This issue is still present in the latest upstream pacemaker and is still a high priority. However, new policy prevents us from keeping this report open, so an upstream bug has been filed for the issue and this report will be closed. It will be reopened when developer time becomes available to address it.

Comment 16 RHEL Program Management 2021-06-30 07:30:55 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

Comment 17 Ken Gaillot 2021-06-30 14:04:28 UTC
This issue is still a high priority, and when developer time becomes available for it, we will reopen this bz.