Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (Important: pcs security and bug fix update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2023:2652
DevTestResults:

These commands were copied into test_checkpoint_diff.sh:

export NODELIST=(r09-02-a r09-02-b)
pcs host auth -u hacluster -p $PASSWORD ${NODELIST[*]}
pcs cluster setup HACluster ${NODELIST[*]} --start --wait
for node in ${NODELIST[*]}; do pcs stonith create fence-1-$node fence_xvm; done
for node in ${NODELIST[*]}; do pcs stonith create fence-2-$node fence_xvm; done
for node in ${NODELIST[*]}; do pcs stonith level add 1 $node fence-1-$node; pcs stonith level add 2 $node fence-2-$node; done
pcs resource create p-1 ocf:pacemaker:Dummy --no-default-ops
pcs resource create p-2 ocf:pacemaker:Dummy --no-default-ops
pcs constraint location p-1 prefers ${NODELIST[0]}
pcs constraint location p-2 avoids ${NODELIST[0]}
pcs resource create s-1 ocf:pacemaker:Stateful promotable --no-default-ops
pcs constraint location s-1-clone rule role=master "#uname" eq ${NODELIST[0]}
pcs resource create oc-1 ocf:pacemaker:Dummy --no-default-ops
pcs resource create oc-2 ocf:pacemaker:Dummy --no-default-ops
pcs constraint order oc-1 then oc-2
pcs constraint colocation add oc-2 with oc-1
pcs resource create oc-set-1 ocf:pacemaker:Dummy --no-default-ops
pcs resource create oc-set-2 ocf:pacemaker:Dummy --no-default-ops
pcs constraint order set oc-set-1 oc-set-2
pcs constraint colocation set oc-set-2 oc-set-1
pcs resource create t ocf:pacemaker:Dummy --no-default-ops
pcs constraint ticket add Ticket t
pcs constraint ticket set p-1 p-2 setoptions ticket=Ticket-set
pcs alert create path=/usr/bin/true id=Alert
pcs alert recipient add Alert value=recipient-value
pcs resource defaults resource-stickiness=2
pcs resource op defaults timeout=90
pcs property set maintenance-mode=false
pcs tag create TAG p-1 p-2
pcs resource defaults set create id=set-1 meta target-role=Started
pcs resource op defaults set create id=op-set-1 score=10 meta interval=30s

[root@r09-02-a ~]# rpm -q pcs
pcs-0.11.4-7.el9.x86_64

[root@r09-02-a ~]# ./test_checkpoint_diff.sh
r09-02-a: Authorized
r09-02-b: Authorized
No addresses specified for host 'r09-02-a', using 'r09-02-a'
No addresses specified for host 'r09-02-b', using 'r09-02-b'
Destroying cluster on hosts: 'r09-02-a', 'r09-02-b'...
r09-02-a: Successfully destroyed cluster
r09-02-b: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'r09-02-a', 'r09-02-b'
r09-02-a: successful removal of the file 'pcsd settings'
r09-02-b: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'r09-02-a', 'r09-02-b'
r09-02-a: successful distribution of the file 'corosync authkey'
r09-02-a: successful distribution of the file 'pacemaker authkey'
r09-02-b: successful distribution of the file 'corosync authkey'
r09-02-b: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'r09-02-a', 'r09-02-b'
r09-02-a: successful distribution of the file 'corosync.conf'
r09-02-b: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
Starting cluster on hosts: 'r09-02-a', 'r09-02-b'...
Waiting for node(s) to start: 'r09-02-a', 'r09-02-b'...
r09-02-a: Cluster started
r09-02-b: Cluster started
Deprecation Warning: Role value 'master' is deprecated and should not be used, use 'Promoted' instead
Adding oc-1 oc-2 (kind: Mandatory) (Options: first-action=start then-action=start)
Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs resource defaults update' instead.
Warning: Defaults do not apply to resources which override them with their own defined values
Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs resource op defaults update' instead.
Warning: Defaults do not apply to resources which override them with their own defined values
Warning: Defaults do not apply to resources which override them with their own defined values
Warning: Defaults do not apply to resources which override them with their own defined values

[root@r09-02-a ~]# pcs config checkpoint diff 1 live
Differences between checkpoint 1 (-) and live configuration (+):
Resources:
+   Resource: p-1 (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: p-1-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Resource: p-2 (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: p-2-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Resource: oc-1 (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: oc-1-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Resource: oc-2 (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: oc-2-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Resource: oc-set-1 (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: oc-set-1-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Resource: oc-set-2 (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: oc-set-2-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Resource: t (class=ocf provider=pacemaker type=Dummy)
+     Operations:
+       monitor: t-monitor-interval-10s
+         interval=10s
+         timeout=20s
+   Clone: s-1-clone
+     Meta Attributes: s-1-clone-meta_attributes
+       promotable=true
+     Resource: s-1 (class=ocf provider=pacemaker type=Stateful)
+       Operations:
+         monitor: s-1-monitor-interval-10s
+           interval=10s
+           timeout=20s
+           role=Promoted
+         monitor: s-1-monitor-interval-11s
+           interval=11s
+           timeout=20s
+           role=Unpromoted
Stonith Devices:
+   Resource: fence-1-r09-02-a (class=stonith type=fence_xvm)
+     Operations:
+       monitor: fence-1-r09-02-a-monitor-interval-60s
+         interval=60s
+   Resource: fence-1-r09-02-b (class=stonith type=fence_xvm)
+     Operations:
+       monitor: fence-1-r09-02-b-monitor-interval-60s
+         interval=60s
+   Resource: fence-2-r09-02-a (class=stonith type=fence_xvm)
+     Operations:
+       monitor: fence-2-r09-02-a-monitor-interval-60s
+         interval=60s
+   Resource: fence-2-r09-02-b (class=stonith type=fence_xvm)
+     Operations:
+       monitor: fence-2-r09-02-b-monitor-interval-60s
+         interval=60s
Fencing Levels:
+   Target: r09-02-a
+     Level 1 - fence-1-r09-02-a
+     Level 2 - fence-2-r09-02-a
+   Target: r09-02-b
+     Level 1 - fence-1-r09-02-b
+     Level 2 - fence-2-r09-02-b
Location Constraints:
+   Resource: p-1
+     Enabled on:
+       Node: r09-02-a (score:INFINITY) (id:location-p-1-r09-02-a-INFINITY)
+   Resource: p-2
+     Disabled on:
+       Node: r09-02-a (score:-INFINITY) (id:location-p-2-r09-02-a--INFINITY)
+   Resource: s-1-clone
+     Constraint: location-s-1-clone
+       Rule: role=Promoted score=INFINITY (id:location-s-1-clone-rule)
+         Expression: #uname eq r09-02-a (id:location-s-1-clone-rule-expr)
Ordering Constraints:
+   start oc-1 then start oc-2 (kind:Mandatory) (id:order-oc-1-oc-2-mandatory)
+   Resource Sets:
+     set oc-set-1 oc-set-2 (id:order_set_o1o2_set) (id:order_set_o1o2)
Colocation Constraints:
+   oc-2 with oc-1 (score:INFINITY) (id:colocation-oc-2-oc-1-INFINITY)
+   Resource Sets:
+     set oc-set-2 oc-set-1 (id:colocation_set_o2o1_set) setoptions score=INFINITY (id:colocation_set_o2o1)
Ticket Constraints:
+   t ticket=Ticket (id:ticket-Ticket-t)
+   Resource Sets:
+     set p-1 p-2 (id:ticket_set_p1p2_set) setoptions ticket=Ticket-set (id:ticket_set_p1p2)
Alerts:
-   No alerts defined
+   Alert: Alert (path=/usr/bin/true)
+     Recipients:
+       Recipient: Alert-recipient (value=recipient-value)
Resources Defaults:
    Meta Attrs: build-resource-defaults
-     resource-stickiness=1
?                         ^
+     resource-stickiness=2
?                         ^
+   Meta Attrs: set-1
+     target-role=Started
Operations Defaults:
-   No defaults set
+   Meta Attrs: op_defaults-meta_attributes
+     timeout=90
+   Meta Attrs: op-set-1 score=10
+     interval=30s
Cluster Properties:
+   cluster-infrastructure: corosync
+   cluster-name: HACluster
+   dc-version: 2.1.5-7.el9-a3f44794f94
+   have-watchdog: false
+   maintenance-mode: false
Tags:
-   No tags defined
+   TAG
+     p-1
+     p-2
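
A note on the output format: the interleaved lines beginning with "?" appear to be character-level change markers in the style of Python difflib's ndiff output; the caret points at the character that differs between the paired "-" and "+" lines (here, resource-stickiness changing from 1 to 2). For completeness, the related checkpoint subcommands are sketched below against the standard pcs CLI; these invocations are illustrative and were not part of the recorded test run:

# list saved configuration checkpoints
pcs config checkpoint
# show the full configuration stored in checkpoint 1
pcs config checkpoint view 1
# compare checkpoint 1 against the live configuration (the command under test)
pcs config checkpoint diff 1 live
# roll the cluster configuration back to checkpoint 1
pcs config checkpoint restore 1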