Bug 2036815 - Add new multiple-active option for "stop unexpected instances"
Summary: Add new multiple-active option for "stop unexpected instances"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.5
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 8.7
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks: 2041747 2062848 2062850
 
Reported: 2022-01-04 05:29 UTC by Reid Wahl
Modified: 2022-11-08 10:39 UTC
CC: 8 users

Fixed In Version: pacemaker-2.1.3-1.el8
Doc Type: Enhancement
Doc Text:
.The `multiple-active` resource parameter now accepts a value of `stop_unexpected`
The `multiple-active` resource parameter determines recovery behavior when a resource is active on more than one node when it should not be. By default, this situation requires a full restart of the resource, even if the resource is running successfully where it should be. With this update, the `multiple-active` resource parameter accepts a value of `stop_unexpected`, which allows you to specify that only unexpected instances of a multiply-active resource are stopped. It is the user's responsibility to verify that the service and its resource agent can function with extra active instances without requiring a full restart.
Clone Of:
Clones: 2041747 2062848 2062850
Environment:
Last Closed: 2022-11-08 09:42:25 UTC
Type: Bug
Target Upstream Version:


Attachments
pe-input file for simulating the probe issue (34.96 KB, text/plain)
2022-01-04 05:29 UTC, Reid Wahl


Links
Red Hat Issue Tracker RHELPLAN-106773 (last updated 2022-01-04 05:33:38 UTC)
Red Hat Knowledge Base (Solution) 1309133 (last updated 2022-06-27 16:45:02 UTC)
Red Hat Product Errata RHBA-2022:7573 (last updated 2022-11-08 09:42:42 UTC)

Description Reid Wahl 2022-01-04 05:29:48 UTC
Created attachment 1848767 [details]
pe-input file for simulating the probe issue

Description of problem:

When a node (re)joins the cluster and a bundle replica is scheduled to start on that node, probes run for all of the following resources on that node:
  - all bundle replica wrappers **other than** the one that's scheduled to start on that node
  - all bundle replica container resources


To put this into a concrete example, let's say node control0001-naz91 has just joined, and nodes control0001-cdm and control0001-lb are already up and healthy. Then we might see a transition like the following. (I've elided the non-galera resources and actions.)


[root@fastvm-rhel-8-0-23 pacemaker]# crm_simulate -Sx /tmp/pe-input-test 
Using the original execution date of: 2021-12-27 04:36:40Z
Current cluster status:
  * Node List:
    * Online: [ control0001-cdm control0001-lb control0001-naz91 ]
    * GuestOnline: [ galera-bundle-1@control0001-cdm galera-bundle-2@control0001-lb ]

  * Full List of Resources:
    * Container bundle set: galera-bundle [cluster.common.tag/banca_d_italia_sddc-osp16_containers-mariadb:pcmklatest]:
      * galera-bundle-0	(ocf:heartbeat:galera):	 Stopped
      * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted control0001-cdm
      * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted control0001-lb

Transition Summary:
  * Start      galera-bundle-podman-0                 (                   control0001-naz91 )
  * Start      galera-bundle-0                        (                   control0001-naz91 )
  * Start      galera:0                               (                     galera-bundle-0 )

Executing Cluster Transition:
  * Resource action: galera-bundle-podman-0 monitor on control0001-naz91
  * Resource action: galera-bundle-podman-1 monitor on control0001-naz91
  * Resource action: galera-bundle-1 monitor on control0001-naz91
  * Resource action: galera-bundle-podman-2 monitor on control0001-naz91
  * Resource action: galera-bundle-2 monitor on control0001-naz91
  * Pseudo action:   galera-bundle_start_0
  * Pseudo action:   galera-bundle-master_start_0
  * Resource action: galera-bundle-podman-0 start on control0001-naz91
  * Resource action: galera-bundle-0 monitor on control0001-naz91
  * Resource action: galera-bundle-podman-0 monitor=60000 on control0001-naz91
  * Resource action: galera-bundle-0 start on control0001-naz91
  * Resource action: galera          start on galera-bundle-0
  * Pseudo action:   galera-bundle-master_running_0
  * Resource action: galera-bundle-0 monitor=30000 on control0001-naz91
  * Pseudo action:   galera-bundle_running_0
  * Resource action: galera          monitor=30000 on galera-bundle-0
  * Resource action: galera          monitor=20000 on galera-bundle-0
Using the original execution date of: 2021-12-27 04:36:40Z

Revised Cluster Status:
  * Node List:
    * Online: [ control0001-cdm control0001-lb control0001-naz91 ]
    * GuestOnline: [ galera-bundle-0@control0001-naz91 galera-bundle-1@control0001-cdm galera-bundle-2@control0001-lb ]

  * Full List of Resources:
    * Container bundle set: galera-bundle [cluster.common.tag/banca_d_italia_sddc-osp16_containers-mariadb:pcmklatest]:
      * galera-bundle-0	(ocf:heartbeat:galera):	 Unpromoted control0001-naz91
      * galera-bundle-1	(ocf:heartbeat:galera):	 Promoted control0001-cdm
      * galera-bundle-2	(ocf:heartbeat:galera):	 Promoted control0001-lb


Notes:
  - Probes for galera-bundle-podman-{0,1,2} and galera-bundle-{1,2} are scheduled on node control0001-naz91.
  - The plan is to start galera-bundle-0/galera-bundle-podman-0 there.
  - galera-bundle-podman-{1,2} and galera-bundle-{1,2} are already running on nodes control0001-cdm and control0001-lb, respectively.


The problem: if the probes of replicas 1 and 2 fail (e.g., time out) instead of returning OCF_SUCCESS or OCF_NOT_RUNNING, Pacemaker views the healthy replicas as FAILED and triggers recovery on the healthy nodes.


In the support case where this issue appeared, there was a problem on node control0001-naz91. The two wrapper probes (galera-bundle-{1,2}) succeeded. But all three container probes (galera-bundle-podman-{0,1,2}) timed out after 120s on control0001-naz91, returning OCF_ERR_GENERIC.

As a result, the **healthy** galera-bundle-{1,2} resources running on nodes cdm and lb went into FAILED state. There were "active on 2 nodes" errors, and pacemaker initiated a recovery sequence. This caused an extended outage for the customer and required manual recovery because of the way the galera resource agent operates. (Once a galera resource enters unpromoted state, it can't promote until it receives data from all nodes; but one node was unhealthy and couldn't retrieve the necessary data.)


Is each bundle replica unique? If not, then it seems desirable to treat them like anonymous clones with regard to scheduling probes -- that is, scheduling a probe for only one replica.

-----

Version-Release number of selected component (if applicable):

pacemaker-2.0.5-9.el8_4.3 (reporting customer)

Reproducible with current main.

-----

How reproducible:

Always

-----

Steps to Reproduce:

Run `crm_simulate -Sx pe-input-minimal` on the pe-input-minimal file attached to this bug; this is arguably easier than building a real bundle resource.

-----

Actual results:

  * Resource action: galera-bundle-podman-0 monitor on control0001-naz91
  * Resource action: galera-bundle-podman-1 monitor on control0001-naz91
  * Resource action: galera-bundle-1 monitor on control0001-naz91
  * Resource action: galera-bundle-podman-2 monitor on control0001-naz91
  * Resource action: galera-bundle-2 monitor on control0001-naz91

-----

Expected results:

  * Resource action: galera-bundle-podman-0 monitor on control0001-naz91

and maybe

  * Resource action: galera-bundle-1 monitor on control0001-naz91

Comment 2 Ken Gaillot 2022-01-04 20:53:23 UTC
Whenever any node joins, all resources in the cluster must be probed on that node, even resources known to be active elsewhere, so that the state of the node is known conclusively. Basically, we don't trust any node that leaves; we want to be sure it didn't mistakenly start anything before rejoining.

Bundle replicas are comparable to unique clones, even if replicas-per-host is 1. The service inside the container isn't relevant at that point, only the container itself, and a node can run any number of containers. Again, the goal of the probe is to ensure that the node didn't mistakenly start one of the replicas while it was gone.

Once the probes timed out, the cluster had to assume the service being probed might be active there, so it then had to proceed with "multiply active" recovery. By default that involves stopping all instances and then starting the intended one (controllable by the multiple-active resource meta-attribute, the other options being stopping it everywhere or unmanaging it).
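
For reference, multiple-active is a per-resource meta-attribute; a minimal sketch of setting the existing values with pcs (the resource name "myrsc" is hypothetical):

    # Default: stop all instances, then restart on the intended node
    pcs resource meta myrsc multiple-active=stop_start

    # Stop all instances and leave the resource stopped
    pcs resource meta myrsc multiple-active=stop_only

    # Leave instances running but unmanaged until an administrator intervenes
    pcs resource meta myrsc multiple-active=block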

I could imagine a new option for multiple-active that would leave the intended instance active (if it is) and stop any other instances. I think that's not the default because some services get confused if multiple instances are active, and need a reset. We could use this bz as an RFE for that.

Alternatively, maybe Pacemaker should handle failed probes (vs simply an unexpected result) differently. Maybe there should be a number of retries, or a failed probe should be recovered with a stop instead of considering the resource failed. For example: a probe finds a second instance running in an unexpected location -> multiple-active recovery; a probe returns a result other than running or not running -> stop the resource then proceed as if the probe found it not running.

Comment 3 Reid Wahl 2022-01-04 22:49:02 UTC
(In reply to Ken Gaillot from comment #2)
> Bundle replicas are comparable to unique clones, even if replicas-per-host
> is 1. The service inside the container isn't relevant at that point, only
> the container itself, and a node can run any number of containers.

This is my main point of confusion. I assumed the container instances for a bundle were basically anonymous clones of one another, so there would be no need to check for **particular** instances of a given container as we see happening here. I assumed that checking for **any** instance of that container (on a node where zero instances are expected to be running) would suffice.

If that assumption is incorrect, then I still don't fully understand, but I'll take your word for it if it's necessary to treat each container as unique for the purpose of probes. My docker experience is limited and my podman experience is zero. And bundles are more complex than other resource variants in general.


> Once the probes timed out, the cluster had to assume the service being
> probed might be active there, so it then had to proceed with "multiply
> active" recovery. By default that involves stopping all instances and then
> starting the intended one (controllable by the multiple-active resource
> meta-attribute, the other options being stopping it everywhere or unmanaging
> it).

In the context that the probe failures have already occurred, the "multiply active" recovery behavior makes sense (even if it's problematic in cases like this and we want to use this BZ as an RFE).


> Alternatively, maybe Pacemaker should handle failed probes (vs simply an
> unexpected result) differently. Maybe there should be a number of retries,

In this particular scenario, retries likely wouldn't have helped. Node naz91 had some serious issues.


> or a failed probe should be recovered with a stop instead of considering the
> resource failed. For example: a probe finds a second instance running in an
> unexpected location -> multiple-active recovery; a probe returns a result
> other than running or not running -> stop the resource then proceed as if
> the probe found it not running.

That sounds reasonable to me, acknowledging the possibility of unintended consequences that might pop up.

Comment 4 Ken Gaillot 2022-01-04 23:43:43 UTC
(In reply to Reid Wahl from comment #3)
> (In reply to Ken Gaillot from comment #2)
> > Bundle replicas are comparable to unique clones, even if replicas-per-host
> > is 1. The service inside the container isn't relevant at that point, only
> > the container itself, and a node can run any number of containers.
> 
> This is my main point of confusion. I assumed the container instances for a
> bundle were basically anonymous clones of one another, so there would be no
> need to check for **particular** instances of a given container as we see
> happening here. I assumed that checking for **any** instance of that
> container (on a node where zero instances are expected to be running) would
> suffice.

The underlying container technology has no concept of clones -- each container has a unique ID that the container technology knows it by. From Pacemaker's view, even if the bundle replicas behave like an anonymous clone, the container resource for each replica is a unique primitive with a separate name that must be passed to the agent when probing it.

Basically, for a monitor, the container agents check whether a particular container name is registered and active. So checking one isn't sufficient for the others, even if they are all identical containers running an identical service inside.
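
To illustrate, a probe for one replica's container boils down to asking the runtime about one specific name, roughly like this (an illustrative sketch, not the agent's actual code):

    # Succeeds only if this exact container name exists and reports running
    podman inspect --format '{{.State.Running}}' galera-bundle-podman-0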

> > Alternatively, maybe Pacemaker should handle failed probes (vs simply an
> > unexpected result) differently. Maybe there should be a number of retries,
> 
> In this particular scenario, retries likely wouldn't have helped. Node naz91
> had some serious issues.

Yeah, that might not be the best approach.

> > or a failed probe should be recovered with a stop instead of considering the
> > resource failed. For example: a probe finds a second instance running in an
> > unexpected location -> multiple-active recovery; a probe returns a result
> > other than running or not running -> stop the resource then proceed as if
> > the probe found it not running.
> 
> That sounds reasonable to me, acknowledging the possibility of unintended
> consequences that might pop up.

We can use this bz for that.

Comment 9 Ken Gaillot 2022-02-02 18:01:14 UTC
Getting deeper into this, I'm wondering if the best course of action might be to use the multiple-active="block" resource meta-attribute. With that option set, in a case like this, the cluster would leave the existing instances running, but would not start or stop them under any circumstances until an administrator investigated and cleared the error.

The issue is that the current default behavior was chosen as the safest approach to a situation like this. In this case, the service was not running on the node with the problems, but the cluster could not confirm that, and so had to assume that it might be active. For many services, having an unexpected instance running can cause problems for existing instances, and a full restart is required to be sure they are running properly and not left active in a degraded state.

Ideally, we could rely on the monitor action to detect when the other instances are degraded, but given the great diversity of resource agents, that could expose a lot of agent bugs.

The originally proposed fix (issuing a stop command for failed probes) could not be applied unconditionally because of the above issue. We could potentially make it a resource option, but I'm not sure users would fully understand the consequences. I no longer think it's worthwhile to distinguish "definitely multiply active" from "possibly multiply active" -- if recovery is required when multiply active is definite, then safety requires it even when it's a maybe.

I see our main choices for this bz being:

* Use the existing multiple-active="block", requiring the administrator to investigate cases like this.

* Add a new multiple-active value for "stop unexpected instances only", requiring the administrator to understand the risks and confirm either that the services they're running won't be affected by an unexpected instance, or that the monitor actions of the resource agents in use can reliably detect any issues that would be caused.

Opinions?

Comment 10 Reid Wahl 2022-02-02 20:57:50 UTC
(In reply to Ken Gaillot from comment #9)

As a quick summary for my later reference, since this was giving me a minor headache until coffee kicked in... It sounds like the issue with the original idea is twofold:

1. A probe return code other than 0 (running) or 7 (not running) doesn't tell us with certainty *in general* whether the resource is actually running or to what extent/in what way it's degraded.
2. If a probe doesn't tell us definitively that a resource instance is running or not running, then it's not safe *in general* to simply stop instances that are in unexpected state without applying the multiple-active behavior. Depending on the particular resource agent and use case, safe recovery may require a full stop across all instances.

Makes sense to me.


I'm generally averse to requiring manual intervention in an HA cluster, so my knee-jerk response is to add the "stop unexpected instances only" option and have pcs give the user a warning prompt if they try to configure it. Tell them the risks/caveats and suggest using "block".

Comment 11 Riccardo Bruzzone 2022-02-11 09:34:13 UTC
Hi Reid
The "stop unexpected instances only" together with a warning in the PCS status makes sense also from my point of view.
Applying this approach to the event described in this Bugzilla, what would the result be ?

BR
Riccardo

Comment 12 Ken Gaillot 2022-02-11 16:14:24 UTC
(In reply to Riccardo Bruzzone from comment #11)
> Hi Reid
> The "stop unexpected instances only" together with a warning in the PCS
> status makes sense also from my point of view.
> Applying this approach to the event described in this Bugzilla, what would
> the result be ?
> 
> BR
> Riccardo

When the problematic node rejoined the cluster, and the probes failed, the cluster would still consider the healthy bundle instances to be multiply active. At that point, the cluster would determine where those instances should and shouldn't be, and issue stop actions for the problematic node only. Those stops would likely fail, leading to that node getting fenced.

Comment 17 Ken Gaillot 2022-03-30 22:08:14 UTC
Reid and Riccardo,

There is one aspect of the behavior that is an open question.

If a resource has multiple-active=stop_unexpected, and is active on multiple nodes, it will be left running on its expected node and stopped anywhere else it is active.

But what should happen to dependent resources? If some other resource is ordered after the multiply active resource, or listed after it in a group, it would normally be restarted in this situation, because it must be stopped before the primary resource can be stopped (even though the primary resource is stopping on only the unexpected nodes). This seems like the safest approach to me, but someone who uses stop_unexpected might not expect the dependent resources to restart.

We could possibly let this behavior be controlled by whether the dependent resource also has multiple-active=stop_unexpected, but that would add a layer of complication to both user understanding and the implementation.
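
For concreteness, a hypothetical configuration where this question arises (resource names A and B are placeholders):

    # B is colocated with and ordered after A; if A is multiply active
    # with stop_unexpected, must B restart even where A keeps running?
    pcs resource meta A multiple-active=stop_unexpected
    pcs constraint order start A then start B
    pcs constraint colocation add B with A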

Comment 18 Reid Wahl 2022-03-30 23:10:32 UTC
(In reply to Ken Gaillot from comment #17)
> This seems like the safest
> approach to me, but someone who uses stop_unexpected might not expect the
> dependent resources to restart.
> 
> We could possibly let this behavior be controlled by whether the dependent
> resource also has multiple-active=stop_unexpected, but that would add a
> layer of complication to both user understanding and the implementation.

Yeah, that is a tough call. I believe the common case would be to expect the dependent resources **not** to be affected on any node where the primary resource won't be stopped. In other words, if the primary resource isn't stopping on an expected node, don't stop the dependent resource on that expected node. I think most users who have dependent resources will be surprised when their dependent resources restart despite the expected multiply active resource continuing to run.

On the other hand, changing the constraint handling in this scenario adds implementation complexity, making the scheduler even harder to follow. I'm not sure the return justifies the investment. Bundles in OSP clusters will probably be the most common use case for multiple-active=stop_unexpected, and I don't think there are typically any dependent resources.

I'm less concerned about user understanding, because I think using this as the default behavior would be the common case. Users who do want the dependent resources to restart would (I expect) be less common, and they could be directed to configure their resources accordingly.

Of course I'm speculating based on more general observations about user expectations.

There likely will be some users who desire the restart. I can imagine scenarios where the dependent resource must restart everywhere if there's some change in the landscape of multiply active resources. No specifics come to mind but we've seen similar application demands.

Comment 19 Reid Wahl 2022-03-31 01:05:03 UTC
(In reply to Reid Wahl from comment #18)
> (In reply to Ken Gaillot from comment #17)
> > We could possibly let this behavior be controlled by whether the dependent
> > resource also has multiple-active=stop_unexpected, but that would add a
> > layer of complication to both user understanding and the implementation.

I realized I slightly misread this, which colored my response a bit. Most of my comment stands.

If we make the behavior configurable, then I lean toward the default behavior being "don't restart the dependent resources", if that's feasible to implement. Seems the most intuitive to the most users.

Comment 20 Ken Gaillot 2022-03-31 15:32:06 UTC
(In reply to Reid Wahl from comment #18)
> (In reply to Ken Gaillot from comment #17)
> > This seems like the safest
> > approach to me, but someone who uses stop_unexpected might not expect the
> > dependent resources to restart.
> > 
> > We could possibly let this behavior be controlled by whether the dependent
> > resource also has multiple-active=stop_unexpected, but that would add a
> > layer of complication to both user understanding and the implementation.
> 
> Yeah, that is a tough call. I believe the common case would be to expect the
> dependent resources **not** to be affected on any node where the primary
> resource won't be stopped. In other words, if the primary resource isn't
> stopping on an expected node, don't stop the dependent resource on that
> expected node. I think most users who have dependent resources will be
> surprised when their dependent resources restart despite the expected
> multiply active resource continuing to run.

I do think that's most intuitive; unfortunately, it's neither the safest approach nor applicable in all situations. Consider an ordering without a colocation -- does it still make sense to leave the dependent resource running?

The problematic case would be a dependent resource that does some sort of discovery to find a server to connect to, and it may have connected to the unexpected instance. Stopping the dependent is the safest approach. Given an ordering "start A then start B", the reverse "stop B then stop A" is implied (by default), which suggests it may not be safe to stop (any instance of) A while B is active.
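
In pcs terms (resource names are placeholders; the second form is shown only for comparison, to illustrate how the implied reverse ordering can be disabled):

    # "start A then start B" implies "stop B then stop A" by default
    pcs constraint order start A then start B

    # With symmetrical=false, only the start ordering applies
    pcs constraint order start A then start B symmetrical=false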

Throw in clones and bundles and there are even more corner cases. (Interleaved? Unique?)

Restarting dependents limits the usefulness of stop_unexpected, but skipping the restarts means that users have to really understand all the ramifications of the option in their particular configuration, taking into account the interactions of multiple resource agents. And there's no easy way for a user to know that an agent would be OK with the behavior. It makes me question whether the option is even a good idea. At least if we restart dependents, it can be reasonably safe, even if more limited.

Comment 21 Riccardo Bruzzone 2022-04-07 07:01:13 UTC
(In reply to Ken Gaillot from comment #20)

> Restarting dependents limits the usefulness of stop_unexpected, but skipping
> the restarts means that users have to really understand all the
> ramifications of the option in their particular configuration, taking into
> account the interactions of multiple resource agents. And there's no easy
> way for a user to know that an agent would be OK with the behavior. It makes
> me question whether the option is even a good idea. At least if we restart
> dependents, it can be reasonably safe, even if more limited.

Restarting dependents is probably the easier approach to follow, taking into account that many customers don't have deep knowledge of the possible ramifications of their configuration.
This approach could be managed as a configuration option defined during OSP installation or any OSP redeployment.
For example, the feature could be disabled by customers who are expert on this topic, or enabled by any customer looking for an easier and safer approach.

Comment 22 Ken Gaillot 2022-04-08 17:03:00 UTC
Feature merged in upstream main branch as of commit 0e4e17e97

As discussed, any resources ordered after the multiply active resource will still need to be restarted. There is no configuration option to change that behavior; adding one would complicate both the implementation and the option understandability, so it will be considered only if user interest arises.

Comment 23 Ken Gaillot 2022-04-14 16:39:02 UTC
A generic reproducer is:

* Create a cluster with at least 2 nodes.
* Create any resource, e.g. `pcs resource create test ocf:pacemaker:Dummy meta multiple-active=stop_unexpected`
* Once the resource is active, manually make it active on some other node, e.g. `touch /run/Dummy-test.state`
* Refresh to get probes run again, e.g. `pcs resource refresh`

Without multiple-active=stop_unexpected, the resource should be stopped on both nodes and restarted on its original node; with stop_unexpected, it should be stopped on the extra node, with no stop or start on the original node.
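
Put together as a runnable sketch (same resource name as above; assumes the Dummy agent's default state file path):

    # Create a resource with the new option
    pcs resource create test ocf:pacemaker:Dummy meta multiple-active=stop_unexpected

    # On a node where the resource is NOT running, fake an unexpected
    # active instance by creating the Dummy agent's state file
    touch /run/Dummy-test.state

    # Trigger re-probing of all resources
    pcs resource refresh

    # Expected: a stop only on the node with the stray state file,
    # and no stop/start on the node where the resource was running
    pcs status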

Comment 25 Riccardo Bruzzone 2022-05-13 08:56:08 UTC
Hello,
The customer is asking about progress on this Bugzilla.
Could you help me understand when this fix will be available in RHEL 8.4?
Is the fix planned for November (Comment 8)?

Thank you so much.
BR
Riccardo

Comment 26 Ken Gaillot 2022-05-13 14:39:11 UTC
(In reply to Riccardo Bruzzone from comment #25)
> Hello,
> The customer is asking about progress on this Bugzilla.
> Could you help me understand when this fix will be available in RHEL 8.4?
> Is the fix planned for November (Comment 8)?
> 
> Thank you so much.
> BR
> Riccardo

Hi,

The 8.4 z-stream is being tracked as Bug 2062850. It should make the next z-stream batch, which I believe is expected around the end of this month. This is a new configuration option, so users will need to modify their cluster configuration to take advantage of it (presumably OpenStack will incorporate it transparently to users, but I don't know their planned schedule for that).

Comment 30 Markéta Smazová 2022-06-23 16:33:20 UTC
>   [root@virt-550 ~]# rpm -q pacemaker
>   pacemaker-2.1.3-2.el8.x86_64

Set up resource `test` with the meta attribute `multiple-active=stop_unexpected`:

>   [root@virt-550 ~]# pcs resource config test
>   Resource: test (class=ocf provider=pacemaker type=Dummy)
>     Meta Attributes: test-meta_attributes
>       multiple-active=stop_unexpected
>     Operations:
>       migrate_from: test-migrate_from-interval-0s
>         interval=0s
>         timeout=20s
>       migrate_to: test-migrate_to-interval-0s
>         interval=0s
>         timeout=20s
>       monitor: test-monitor-interval-10s
>         interval=10s
>         timeout=20s
>       reload: test-reload-interval-0s
>         interval=0s
>         timeout=20s
>       reload-agent: test-reload-agent-interval-0s
>         interval=0s
>         timeout=20s
>       start: test-start-interval-0s
>         interval=0s
>         timeout=20s
>       stop: test-stop-interval-0s
>         interval=0s
>         timeout=20s

>   [root@virt-550 ~]# pcs resource
>     * test	(ocf::pacemaker:Dummy):	 Started virt-550


Make it active manually on the other node virt-551:

>   [root@virt-551 ~]# pcs resource debug-start test
>   Operation force-start for test (ocf:pacemaker:Dummy) returned 0 (ok)
>   [root@virt-551 ~]# pcs resource debug-monitor test
>   Operation force-check for test (ocf:pacemaker:Dummy) returned 0 (ok)

Refresh to get probes run again:

>   [root@virt-551 ~]# pcs resource refresh
>   Waiting for 1 reply from the controller
>   ... got reply (done)
>   [root@virt-551 ~]# pcs resource
>     * test	(ocf::pacemaker:Dummy):	 Started virt-550



Resource "test" is stopped on the other node virt-551:

log from node virt-551:
>   Jun 23 17:52:19 virt-551 pacemaker-controld[52195]: notice: Forcing the status of all resources to be redetected
>   Jun 23 17:52:19 virt-551 pacemaker-controld[52195]: warning: new_event_notification (/dev/shm/qb-52195-58165-6-PR3Ddb/qb): Broken pipe (32)
>   Jun 23 17:52:19 virt-551 pacemaker-controld[52195]: notice: State transition S_IDLE -> S_POLICY_ENGINE
>   Jun 23 17:52:19 virt-551 pacemaker-schedulerd[52194]: notice: Actions: Start      test               ( virt-550 )
>   Jun 23 17:52:19 virt-551 pacemaker-schedulerd[52194]: notice: Calculated transition 9, saving inputs in /var/lib/pacemaker/pengine/pe-input-7.bz2
>   Jun 23 17:52:19 virt-551 pacemaker-schedulerd[52194]: notice: Actions: Start      test               ( virt-550 )
>   Jun 23 17:52:19 virt-551 pacemaker-schedulerd[52194]: notice: Calculated transition 10, saving inputs in /var/lib/pacemaker/pengine/pe-input-7.bz2
>   Jun 23 17:52:19 virt-551 pacemaker-controld[52195]: notice: Initiating monitor operation test_monitor_0 locally on virt-551
>   Jun 23 17:52:19 virt-551 pacemaker-controld[52195]: notice: Requesting local execution of probe operation for test on virt-551
>   Jun 23 17:52:19 virt-551 pacemaker-controld[52195]: notice: Initiating monitor operation test_monitor_0 on virt-550
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Transition 10 aborted by operation test_monitor_0 'modify' on virt-550: Event failed
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Transition 10 action 3 (test_monitor_0 on virt-550): expected 'not running' but got 'ok'
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Result of probe operation for test on virt-551: ok
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Transition 10 action 6 (test_monitor_0 on virt-551): expected 'not running' but got 'ok'
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Transition 10 (Complete=8, Pending=0, Fired=0, Skipped=2, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-7.bz2): Stopped
>   Jun 23 17:52:20 virt-551 pacemaker-schedulerd[52194]: error: ocf resource test might be active on 2 nodes (stopping unexpected instances)
>   Jun 23 17:52:20 virt-551 pacemaker-schedulerd[52194]: notice: See https://wiki.clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information
>   Jun 23 17:52:20 virt-551 pacemaker-schedulerd[52194]: notice: Actions: Restart    test               ( virt-550 )
>   Jun 23 17:52:20 virt-551 pacemaker-schedulerd[52194]: error: Calculated transition 11 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-1.bz2
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Initiating stop operation test_stop_0 locally on virt-551
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Requesting local execution of stop operation for test on virt-551
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Result of stop operation for test on virt-551: ok
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Initiating monitor operation test_monitor_10000 on virt-550
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: Transition 11 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-error-1.bz2): Complete
>   Jun 23 17:52:20 virt-551 pacemaker-controld[52195]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE

Resource did not restart on the original node virt-550:

log from node virt-550:
>   Jun 23 17:52:19 virt-550 pacemaker-controld[52239]: notice: Forcing the status of all resources to be redetected
>   Jun 23 17:52:19 virt-550 pacemaker-controld[52239]: notice: Requesting local execution of probe operation for test on virt-550
>   Jun 23 17:52:20 virt-550 pacemaker-controld[52239]: notice: Result of probe operation for test on virt-550: ok
>   Jun 23 17:52:20 virt-550 pacemaker-controld[52239]: notice: Requesting local execution of monitor operation for test on virt-550
>   Jun 23 17:52:20 virt-550 pacemaker-controld[52239]: notice: Result of monitor operation for test on virt-550: ok



marking verified in pacemaker-2.1.3-2.el8

Comment 37 errata-xmlrpc 2022-11-08 09:42:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:7573

