Description of problem:
Pacemaker should allow moving and banning bundle resources with one instance (replicas=1) in the same way it allows moving and banning clone resources with one instance (clone-max=1).

Version-Release number of selected component (if applicable):
pacemaker-1.1.16-12.el7.x86_64

How reproducible:
always

Steps to Reproduce:

moving without specifying a target node
---------------------------------------

DummyBundle is a bundle with replicas=1:

[root@virt-143 ~]# crm_resource --resource DummyBundle --move
Resource 'DummyBundle' not moved: active in 2 locations.
You can prevent 'DummyBundle' from running on a specific location with: --ban --node <name>
Error performing operation: Invalid argument

> crm_resource gives us misinformation about the number of active locations. There should be only one active location because replicas=1.

dummy1-clone is a clone with clone-max=1:

[root@virt-143 ~]# crm_resource --resource dummy1-clone --move
WARNING: Creating rsc_location constraint 'cli-ban-dummy1-clone-on-virt-143' with a score of -INFINITY for resource dummy1-clone on virt-143.
	This will prevent dummy1-clone from running on virt-143 until the constraint is removed using the 'crm_resource --clear' command or manually with cibadmin
	This will be the case even if virt-143 is the last node in the cluster
	This message can be disabled with --quiet

> When targeting the clone ID, a clone with a single instance passed the check and WAS moved by crm_resource.

dummy1-clone is a clone with clone-max=2:

[root@virt-143 ~]# crm_resource --resource dummy1-clone --move
Resource 'dummy1-clone' not moved: active in 2 locations.
You can prevent 'dummy1-clone' from running on a specific location with: --ban --node <name>
Error performing operation: Invalid argument

moving with specifying a target node
------------------------------------

DummyBundle is a bundle with replicas=1:

[root@virt-143 ~]# crm_resource --resource DummyBundle --move --node virt-145
Resource 'DummyBundle' not moved: active on multiple nodes
Error performing operation: Invalid argument

> Just like in the previous case, crm_resource gives us misinformation about the number of active locations. There should be only one active location because replicas=1.

dummy1-clone is a clone with clone-max=1:

[root@virt-143 ~]# crm_resource --resource dummy1-clone --move --node virt-146
[root@virt-143 ~]# pcs constraint --full
Location Constraints:
  Resource: dummy1-clone
    Enabled on: virt-146 (score:INFINITY) (role: Started) (id:cli-prefer-dummy1-clone)

> When targeting the clone ID, a clone with a single instance passed the check and WAS moved by crm_resource.

dummy1-clone is a clone with clone-max=2:

[root@virt-143 ~]# crm_resource --resource dummy1-clone --move --node virt-146
[root@virt-143 ~]# pcs constraint --full
Location Constraints:
  Resource: dummy1-clone
    Enabled on: virt-146 (score:INFINITY) (role: Started) (id:cli-prefer-dummy1-clone)

banning
-------

DummyBundle is a bundle with replicas=1:

[root@virt-143 ~]# crm_resource --resource DummyBundle --ban
Resource 'DummyBundle' not moved: active in 2 locations.
You can prevent 'DummyBundle' from running on a specific location with: --ban --node <name>
Error performing operation: Invalid argument

> Just like in the previous case, crm_resource gives us misinformation about the number of active locations. There should be only one active location because replicas=1.
dummy1-clone is a clone with clone-max=1:

[root@virt-143 ~]# crm_resource --resource dummy1-clone --ban
WARNING: Creating rsc_location constraint 'cli-ban-dummy1-clone-on-virt-144' with a score of -INFINITY for resource dummy1-clone on virt-144.
	This will prevent dummy1-clone from running on virt-144 until the constraint is removed using the 'crm_resource --clear' command or manually with cibadmin
	This will be the case even if virt-144 is the last node in the cluster
	This message can be disabled with --quiet
[root@virt-143 ~]# pcs constraint --full
Location Constraints:
  Resource: dummy1-clone
    Disabled on: virt-144 (score:-INFINITY) (role: Started) (id:cli-ban-dummy1-clone-on-virt-144)

> When targeting the clone ID, a clone with a single instance passed the check and WAS banned by crm_resource.

dummy1-clone is a clone with clone-max=2:

[root@virt-143 ~]# crm_resource --resource dummy1-clone --ban
Resource 'dummy1-clone' not moved: active in 2 locations.
You can prevent 'dummy1-clone' from running on a specific location with: --ban --node <name>
Error performing operation: Invalid argument
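For reference, resources like the ones exercised above can be created along these lines (a sketch only: the container image is a placeholder and the exact pcs syntax varies between pcs versions):

# bundle with a single replica (image name is a placeholder)
pcs resource bundle create DummyBundle container docker image=example.com/dummy:latest replicas=1

# clone with a single instance
pcs resource create dummy1 ocf:pacemaker:Dummy --clone clone-max=1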
Moving to RHEL 8 only
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
FYI, the problem here is that Pacemaker considers the overall bundle instance to be active on 2 (or potentially even 3) nodes because its implicit resources are active on different nodes. For example, the instance's container could be on node1, its remote connection could be on node2, and its containerized primitive is considered to be running on the guest node created by starting the container. It should be reasonably straightforward to special-case bundles when counting active instances and count only containers, not remote connections and containerized primitives. Location constraints for a bundle apply only to its containers, so it makes sense to count only them for this purpose.
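A minimal sketch of that counting special-case in C (illustrative only; the types and function below are hypothetical stand-ins, not Pacemaker's actual internals):

#include <stdio.h>

/* Hypothetical, simplified model of one bundle replica's implicit
 * resources: a container, an optional remote connection, and the
 * containerized primitive. Each may be "active" somewhere different. */
typedef enum { RSC_CONTAINER, RSC_REMOTE, RSC_PRIMITIVE } rsc_kind;

typedef struct {
    rsc_kind kind;
    const char *active_on; /* node name, or NULL if inactive */
} replica_rsc;

/* Count a bundle's active locations by looking only at containers:
 * location constraints on a bundle apply only to its containers, so
 * remote connections and containerized primitives are ignored. */
static int count_active_locations(const replica_rsc *rscs, int n)
{
    int active = 0;

    for (int i = 0; i < n; i++) {
        if ((rscs[i].kind == RSC_CONTAINER) && (rscs[i].active_on != NULL)) {
            active++;
        }
    }
    return active;
}

int main(void)
{
    /* One replica (replicas=1): container on node1, remote connection on
     * node2, primitive on the guest node created by the container.
     * Counting everything would report 3 locations; counting only the
     * container correctly reports 1, so --move and --ban can proceed. */
    replica_rsc bundle[] = {
        { RSC_CONTAINER, "node1" },
        { RSC_REMOTE, "node2" },
        { RSC_PRIMITIVE, "guest-node1" },
    };

    printf("active locations: %d\n", count_active_locations(bundle, 3));
    return 0;
}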
Fixed in upstream main branch as of commit 905dda99
after fix
---------

> [root@virt-502 ~]# rpm -q pacemaker
> pacemaker-2.1.6-1.el8.x86_64

Set up a cluster and a bundle resource with one instance (replicas=1):

> [root@virt-502 ~]# pcs status
> Cluster name: STSRHTS287
> Cluster Summary:
>   * Stack: corosync (Pacemaker is running)
>   * Current DC: virt-502 (version 2.1.6-1.el8-6fdc9deea29) - partition with quorum
>   * Last updated: Mon Jun 19 16:02:02 2023 on virt-502
>   * Last change: Mon Jun 19 16:01:58 2023 by root via cibadmin on virt-502
>   * 2 nodes configured
>   * 4 resource instances configured
>
> Node List:
>   * Online: [ virt-502 virt-503 ]
>
> Full List of Resources:
>   * fence-virt-502 (stonith:fence_xvm): Started virt-502
>   * fence-virt-503 (stonith:fence_xvm): Started virt-503
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Started virt-502
>
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled

> [root@virt-502 ~]# pcs resource config
> Bundle: TestBundle1
>   Podman: image=redis:test replicas=1
>   Network: ip-range-start=127.0.0.2 host-interface=lo host-netmask=8
>   Port Mapping:
>     port=80 (httpd-port)

Move the bundle resource without specifying a target node:

> [root@virt-502 ~]# crm_resource --resource TestBundle1 --move
> WARNING: Creating rsc_location constraint 'cli-ban-TestBundle1-on-virt-502' with a score of -INFINITY for resource TestBundle1 on virt-502.
>   This will prevent TestBundle1 from running on virt-502 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool
>   This will be the case even if virt-502 is the last node in the cluster

> [root@virt-502 ~]# pcs resource
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Started virt-503

> [root@virt-502 ~]# pcs constraint
> Location Constraints:
>   Resource: TestBundle1
>     Disabled on:
>       Node: virt-502 (score:-INFINITY) (role:Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:

RESULT: OK, bundle was moved to another node.

Move the bundle resource and specify a target node:

> [root@virt-502 ~]# crm_resource --resource TestBundle1 --move --node virt-502
> [root@virt-502 ~]# pcs resource
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Starting virt-502
> [root@virt-502 ~]# pcs constraint
> Location Constraints:
>   Resource: TestBundle1
>     Enabled on:
>       Node: virt-502 (score:INFINITY) (role:Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:

RESULT: OK, bundle was moved to the specified node.

Ban the bundle resource:

> [root@virt-502 ~]# crm_resource --resource TestBundle1 --ban
> WARNING: Creating rsc_location constraint 'cli-ban-TestBundle1-on-virt-502' with a score of -INFINITY for resource TestBundle1 on virt-502.
>   This will prevent TestBundle1 from running on virt-502 until the constraint is removed using the clear option or by editing the CIB with an appropriate tool
>   This will be the case even if virt-502 is the last node in the cluster
> [root@virt-502 ~]# pcs resource
>   * Container bundle: TestBundle1 [redis:test]:
>     * TestBundle1-podman-0 (127.0.0.2) (ocf::heartbeat:podman): Started virt-503
> [root@virt-502 ~]# pcs constraint
> Location Constraints:
>   Resource: TestBundle1
>     Enabled on:
>       Node: virt-502 (score:INFINITY) (role:Started)
>     Disabled on:
>       Node: virt-502 (score:-INFINITY) (role:Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:

RESULT: OK, bundle was banned from one of the nodes.

Marking verified in pacemaker-2.1.6-1.el8.x86_64.
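Cleanup note: the cli-prefer/cli-ban constraints created by --move and --ban remain in the CIB afterwards. As the WARNING output above says, they can be removed with the clear option, e.g.:

crm_resource --resource TestBundle1 --clear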