Bug 1466789 - Missing duplicate check for remote node address in 'resource update' command
Summary: Missing duplicate check for remote node address in 'resource update' command
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-30 12:57 UTC by Radek Steiger
Modified: 2020-09-16 15:02 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1673834
Environment:
Last Closed: 2020-09-16 15:02:31 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Bugzilla 1386114   Private: 0   Priority: high   Status: CLOSED   Summary: add remote nodes configuration checks   Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1386114

Description Radek Steiger 2017-06-30 12:57:42 UTC
> Description of problem:

In bug 1386114 we improved the workflow for managing remote and guest nodes, including a variety of checks to prevent duplicate addresses. In the following case the check fails to detect a potential duplicate:

[root@host-031 ~]# pcs cluster node add-remote host-032 MyNode1
Sending remote node configuration files to 'host-032'
host-032: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'host-032'
host-032: successful run of 'pacemaker_remote enable'
host-032: successful run of 'pacemaker_remote start'

[root@host-031 ~]# pcs cluster node add-remote host-033 MyNode2
Sending remote node configuration files to 'host-033'
host-033: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'host-033'
host-033: successful run of 'pacemaker_remote enable'
host-033: successful run of 'pacemaker_remote start'

[root@host-030 ~]# pcs status
...
Online: [ host-030 host-031 ]
RemoteOnline: [ MyNode1 MyNode2 ]
...

Running update:

[root@host-031 ~]# pcs resource update MyNode2 server=host-032
[root@host-031 ~]#

This results in the following (note that the identical value="host-032" is now present in both primitives in the CIB):

[root@host-030 ~]# pcs status
...
Online: [ host-030 host-031 ]
RemoteOnline: [ MyNode1 ]
RemoteOFFLINE: [ MyNode2 ]
...

[root@host-030 ~]# pcs cluster cib
...
      <primitive class="ocf" id="MyNode1" provider="pacemaker" type="remote">
        <instance_attributes id="MyNode1-instance_attributes">
          <nvpair id="MyNode1-instance_attributes-server" name="server" value="host-032"/>
        </instance_attributes>
        <operations>
          <op id="MyNode1-monitor-interval-60s" interval="60s" name="monitor" timeout="30"/>
          <op id="MyNode1-start-interval-0s" interval="0s" name="start" timeout="60"/>
          <op id="MyNode1-stop-interval-0s" interval="0s" name="stop" timeout="60"/>
        </operations>
      </primitive>
      <primitive class="ocf" id="MyNode2" provider="pacemaker" type="remote">
        <instance_attributes id="MyNode2-instance_attributes">
          <nvpair id="MyNode2-instance_attributes-server" name="server" value="host-032"/>
        </instance_attributes>
        <operations>
          <op id="MyNode2-monitor-interval-60s" interval="60s" name="monitor" timeout="30"/>
          <op id="MyNode2-start-interval-0s" interval="0s" name="start" timeout="60"/>
          <op id="MyNode2-stop-interval-0s" interval="0s" name="stop" timeout="60"/>
        </operations>
        <meta_attributes id="MyNode2-meta_attributes"/>
      </primitive>
...
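
A quick way to confirm such a duplicate in the dumped CIB is sketched below; this is only an illustrative one-liner and assumes the nvpair attribute order (name before value) shown in the output above:

# list remote-node server addresses that occur more than once in the CIB
pcs cluster cib | grep -o 'name="server" value="[^"]*"' | sort | uniq -d

On the configuration above this prints name="server" value="host-032" once, confirming that MyNode1 and MyNode2 point at the same address.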



> Version-Release number of selected component (if applicable):

pcs-0.9.158-6.el7


> How reproducible:

Easy.


> Steps to Reproduce:

1. Create first remote node
2. Create second remote node
3. Update one of the remote nodes to use the same server address as the other (see the sketch below)
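
Condensed into a single sketch, using the same host names and commands as in the description above (run from a cluster node with pcs installed):

pcs cluster node add-remote host-032 MyNode1    # first remote node
pcs cluster node add-remote host-033 MyNode2    # second remote node
pcs resource update MyNode2 server=host-032     # accepted with no duplicate-address error
pcs status                                      # MyNode2 ends up RemoteOFFLINE; both primitives now point at host-032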


> Actual results:

The duplicate address is not detected.


> Expected results:

Something like "Error: 'host-032' already exists"

Comment 2 Tomas Jelinek 2017-06-30 15:09:40 UTC
The "pcs resource meta" command should be checked if the same bug occurs there as well.

Also pcs should emit a warning these commands are not meant for managing remote and guest nodes (as merely updating the cib does not check if the node is available, distribute pcmk authkey and so on).
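
A minimal sketch of the node-level alternative for changing a remote node's address, assuming "pcs cluster node remove-remote" is the counterpart of the add-remote command shown in the description (host-034 below is only a placeholder address):

pcs cluster node remove-remote MyNode2          # drop the old remote node definition
pcs cluster node add-remote host-034 MyNode2    # re-add it with the new address

Going through the node-level commands keeps the remote node setup (authkey distribution, pacemaker_remote start) consistent with what add-remote does in the description above, instead of silently editing the CIB.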

Comment 3 Radek Steiger 2017-06-30 15:17:50 UTC
@Tomas: The "pcs resource meta" checks for guest nodes are working fine as per bug 1386114 test results.

Comment 7 Tomas Jelinek 2020-09-16 15:02:31 UTC
This is tracked for RHEL 8: bz1673834.
Considering the current RHEL 7 life cycle stage, there will be no fix for RHEL 7.

