Bug 1578820
| Summary: | Fix moving and banning bundles with one instance | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Tomas Jelinek <tojeline> |
| Component: | pacemaker | Assignee: | Ken Gaillot <kgaillot> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | medium | Docs Contact: | Steven J. Levine <slevine> |
| Priority: | low | | |
| Version: | 8.3 | CC: | cluster-maint, kgaillot, msmazova, revijaya, slevine |
| Target Milestone: | rc | Keywords: | Reopened, Triaged |
| Target Release: | 8.9 | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pacemaker-2.1.6-1.el8 | Doc Type: | Bug Fix |
| Doc Text: | The `crm_resource` command now allows banning or moving a bundle with only a single active replica. Previously, when the `crm_resource` command checked where a bundle with a single replica was active, the command counted both the node where the container was active and the guest node that was created for the container itself. As a result, the `crm_resource` command would not ban or move a bundle with a single active replica. With this fix, the `crm_resource` command now only counts nodes where a bundle's containers are active when determining the number of active replicas. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-11-14 15:32:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | 2.1.6 |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1520665, 1578789, 1621899, 2233771 | | |
Description
Tomas Jelinek
2018-05-16 12:30:39 UTC
Moving to RHEL 8 only

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

FYI, the problem here is that Pacemaker considers the overall bundle instance to be active on 2 (or potentially even 3) nodes because its implicit resources are active on different nodes. For example, the instance's container could be on node1, its remote connection could be on node2, and its containerized primitive is considered to be running on the guest node created by starting the container.

It should be reasonably straightforward to special-case bundles when counting active instances, to count only containers, not remote connections and containerized primitives. Location constraints for a bundle apply only to its containers, so it makes sense to count only them for this purpose.

Fixed in upstream main branch as of commit 905dda99

after fix
--------

> [root@virt-502 ~]# rpm -q pacemaker
> pacemaker-2.1.6-1.el8.x86_64

Setup cluster and a bundle resource with one instance (replicas=1):

> [root@virt-502 ~]# pcs status
> Cluster name: STSRHTS287
> Cluster Summary:
>   * Stack: corosync (Pacemaker is running)
>   * Current DC: virt-502 (version 2.1.6-1.el8-6fdc9deea29) - partition with quorum
>   * Last updated: Mon Jun 19 16:02:02 2023 on virt-502
>   * Last change:  Mon Jun 19 16:01:58 2023 by root via cibadmin on virt-502
>   * 2 nodes configured
>   * 4 resource instances configured
>
> Node List:
>   * Online: [ virt-502 virt-503 ]
>
> Full List of Resources:
>   * fence-virt-502 (stonith:fence_xvm): Started virt-502
>   * fence-virt-503 (stonith:fence_xvm): Started virt-503
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Started virt-502
>
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled

> [root@virt-502 ~]# pcs resource config
> Bundle: TestBundle1
>   Podman: image=redis:test replicas=1
>   Network: ip-range-start=127.0.0.2 host-interface=lo host-netmask=8
>   Port Mapping:
>     port=80 (httpd-port)

Move bundle resource without specifying a target node:

> [root@virt-502 ~]# crm_resource --resource TestBundle1 --move
> WARNING: Creating rsc_location constraint 'cli-ban-TestBundle1-on-virt-502' with a score of -INFINITY for resource TestBundle1 on virt-502.
>   This will prevent TestBundle1 from running on virt-502 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool
>   This will be the case even if virt-502 is the last node in the cluster

> [root@virt-502 ~]# pcs resource
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Started virt-503

> [root@virt-502 ~]# pcs constraint
> Location Constraints:
>   Resource: TestBundle1
>     Disabled on:
>       Node: virt-502 (score:-INFINITY) (role:Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:

RESULT: OK, bundle was moved to another node.
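The -INFINITY ban constraint that `--move` creates stays in the CIB until it is removed, as the WARNING above notes ("until the constraint is removed using the clear option"). A minimal sketch of clearing it, assuming the standard clear commands; neither command is part of the recorded verification output, and the two forms are assumed to be equivalent:

> # remove the cli-ban-TestBundle1-on-virt-502 constraint created by --move
> [root@virt-502 ~]# crm_resource --resource TestBundle1 --clear
> # pcs equivalent of the same cleanup
> [root@virt-502 ~]# pcs resource clear TestBundle1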
Move bundle resource and specify a target node:

> [root@virt-502 ~]# crm_resource --resource TestBundle1 --move --node virt-502

> [root@virt-502 ~]# pcs resource
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Starting virt-502

> [root@virt-502 ~]# pcs constraint
> Location Constraints:
>   Resource: TestBundle1
>     Enabled on:
>       Node: virt-502 (score:INFINITY) (role:Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:

RESULT: OK, bundle was moved to specified node.

Ban bundle resource:

> [root@virt-502 ~]# crm_resource --resource TestBundle1 --ban
> WARNING: Creating rsc_location constraint 'cli-ban-TestBundle1-on-virt-502' with a score of -INFINITY for resource TestBundle1 on virt-502.
>   This will prevent TestBundle1 from running on virt-502 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool
>   This will be the case even if virt-502 is the last node in the cluster

> [root@virt-502 ~]# pcs resource
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Started virt-503

> [root@virt-502 ~]# pcs constraint
> Location Constraints:
>   Resource: TestBundle1
>     Enabled on:
>       Node: virt-502 (score:INFINITY) (role:Started)
>     Disabled on:
>       Node: virt-502 (score:-INFINITY) (role:Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:

RESULT: OK, bundle was banned from one of the nodes.

marking verified in pacemaker-2.1.6-1.el8.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:6970
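For anyone reproducing the verification, the single-replica bundle has to exist before the move and ban steps. A rough sketch of creating it with pcs, assuming the RHEL 8 `pcs resource bundle create` syntax; the option values are copied from the `pcs resource config` output above, but the exact command used for the setup is not recorded in this bug:

> # assumed creation command for the test bundle (one replica; values taken from 'pcs resource config')
> [root@virt-502 ~]# pcs resource bundle create TestBundle1 \
>       container podman image=redis:test replicas=1 \
>       network ip-range-start=127.0.0.2 host-interface=lo host-netmask=8 \
>       port-map id=httpd-port port=80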