Bug 1466789 - Missing duplicity check for remote node address in 'resource update' command
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assigned To: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Reported: 2017-06-30 08:57 EDT by Radek Steiger
Modified: 2017-07-21 07:03 EDT
CC: 5 users

Type: Bug

Description Radek Steiger 2017-06-30 08:57:42 EDT
> Description of problem:

In bug 1386114 we've improved the workflow of managing remote and guest nodes, including a variety of checks to prevent duplicates. In the following case, however, the check fails to detect a potential duplicate:

[root@host-031 ~]# pcs cluster node add-remote host-032 MyNode1
Sending remote node configuration files to 'host-032'
host-032: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'host-032'
host-032: successful run of 'pacemaker_remote enable'
host-032: successful run of 'pacemaker_remote start'

[root@host-031 ~]# pcs cluster node add-remote host-033 MyNode2
Sending remote node configuration files to 'host-033'
host-033: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'host-033'
host-033: successful run of 'pacemaker_remote enable'
host-033: successful run of 'pacemaker_remote start'

[root@host-030 ~]# pcs status
...
Online: [ host-030 host-031 ]
RemoteOnline: [ MyNode1 MyNode2 ]
...

Running update:

[root@host-031 ~]# pcs resource update MyNode2 server=host-032
[root@host-031 ~]#

This results in the following (note that the identical value="host-032" is now present in both primitives in the CIB):

[root@host-030 ~]# pcs status
...
Online: [ host-030 host-031 ]
RemoteOnline: [ MyNode1 ]
RemoteOFFLINE: [ MyNode2 ]
...

[root@host-030 ~]# pcs cluster cib
...
      <primitive class="ocf" id="MyNode1" provider="pacemaker" type="remote">
        <instance_attributes id="MyNode1-instance_attributes">
          <nvpair id="MyNode1-instance_attributes-server" name="server" value="host-032"/>
        </instance_attributes>
        <operations>
          <op id="MyNode1-monitor-interval-60s" interval="60s" name="monitor" timeout="30"/>
          <op id="MyNode1-start-interval-0s" interval="0s" name="start" timeout="60"/>
          <op id="MyNode1-stop-interval-0s" interval="0s" name="stop" timeout="60"/>
        </operations>
      </primitive>
      <primitive class="ocf" id="MyNode2" provider="pacemaker" type="remote">
        <instance_attributes id="MyNode2-instance_attributes">
          <nvpair id="MyNode2-instance_attributes-server" name="server" value="host-032"/>
        </instance_attributes>
        <operations>
          <op id="MyNode2-monitor-interval-60s" interval="60s" name="monitor" timeout="30"/>
          <op id="MyNode2-start-interval-0s" interval="0s" name="start" timeout="60"/>
          <op id="MyNode2-stop-interval-0s" interval="0s" name="stop" timeout="60"/>
        </operations>
        <meta_attributes id="MyNode2-meta_attributes"/>
      </primitive>
...
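
The duplicate can be confirmed straight from the CIB. Assuming xmllint (from libxml2, normally available on RHEL nodes) is installed, the following query lists every remote node's server address, and for the configuration above it prints value="host-032" twice:

pcs cluster cib | xmllint --xpath '//primitive[@type="remote"]//nvpair[@name="server"]/@value' -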



> Version-Release number of selected component (if applicable):

pcs-0.9.158-6.el7


> How reproducible:

Easy.


> Steps to Reproduce:

1. Create first remote node
2. Create second remote node
3. Update one of the remote nodes to use the same server address as the other (condensed into a script sketch below)
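
For convenience, the whole reproduction condenses into three commands (host and node names follow the description above; run from a cluster node):

pcs cluster node add-remote host-032 MyNode1
pcs cluster node add-remote host-033 MyNode2
pcs resource update MyNode2 server=host-032    # accepted, but should be rejected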


> Actual results:

The duplicate address is not detected; the update is applied and MyNode2 goes offline.


> Expected results:

Something like "Error: 'host-032' already exists"
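
Until pcs implements such a check, a hypothetical guard along these lines can catch the duplicate from a shell script before running the update (the variable name and XPath are illustrative; a proper fix inside pcs would also need to skip the resource being updated and compare against cluster node names, not just remote addresses):

NEW_SERVER=host-032
# xmllint exits non-zero when the XPath matches nothing, so the branch
# below is taken only if some resource already uses the address
if pcs cluster cib | xmllint --xpath \
      "//nvpair[@name='server' and @value='$NEW_SERVER']" - >/dev/null 2>&1; then
    echo "Error: '$NEW_SERVER' already exists" >&2
else
    pcs resource update MyNode2 server=$NEW_SERVER
fi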
Comment 2 Tomas Jelinek 2017-06-30 11:09:40 EDT
The "pcs resource meta" command should be checked if the same bug occurs there as well.

Also, pcs should emit a warning that these commands are not meant for managing remote and guest nodes, as merely updating the CIB does not check whether the node is available, does not distribute the pacemaker authkey, and so on.
Comment 3 Radek Steiger 2017-06-30 11:17:50 EDT
@Tomas: The "pcs resource meta" checks for guest nodes are working fine as per bug 1386114 test results.
