Bug 1386114 - add remote nodes configuration checks
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Ivan Devat
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
Depends On:
Blocks:
 
Reported: 2016-10-18 04:18 EDT by Tomas Jelinek
Modified: 2017-08-02 03:32 EDT
CC List: 9 users

See Also:
Fixed In Version: pcs-0.9.158-3.el7
Doc Type: Release Note
Doc Text:
*pcs* now validates the name and the host of a remote or guest node. Previously, the `pcs` command did not validate whether the name or the host of a remote or guest node conflicted with a resource ID or with a cluster node, a situation that could cause the cluster not to work correctly. With this fix, validation has been added to the relevant commands and `pcs` no longer allows a user to configure a cluster with a conflicting name or host for a remote or guest node.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-01 14:24:40 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
proposed fix (part1) (388.08 KB, patch)
2017-05-25 04:35 EDT, Ivan Devat, no flags
proposed fix (part2) (130.59 KB, patch)
2017-05-25 04:36 EDT, Ivan Devat, no flags
proposed fix (part3) (266.66 KB, patch)
2017-05-25 04:37 EDT, Ivan Devat, no flags
proposed fix (part4) (988 bytes, patch)
2017-05-31 08:27 EDT, Ivan Devat, no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1958 normal SHIPPED_LIVE pcs bug fix and enhancement update 2017-08-01 14:09:47 EDT

Description Tomas Jelinek 2016-10-18 04:18:15 EDT
Pcs supports two types of remote nodes: pacemaker remote and virtual domain.

Pacemaker remote nodes are created like this:
# pcs resource create <remote node name> ocf:pacemaker:remote [server=<host address>]
If the server option is not specified, it defaults to <remote node name>.

Virtual domain nodes are created like this:
# pcs resource create <resource id> ocf:heartbeat:VirtualDomain config=<path to libvirt config>
# pcs cluster remote-node add <remote node name> <resource id> [remote-addr=<host address>]
If the remote-addr option is not specified, it defaults to <remote node name>.

Pcs should check that:
* there are no remote nodes with the same <remote node name>
* there are no remote nodes with the same <host address>
* there is no ID in the CIB which is the same as <remote node name>
Some of the checks are done automatically as pcs does not allow duplicate IDs.

Affected commands:
* pcs resource create
* pcs resource update
* pcs resource meta
* pcs cluster remote-node add
We also need to fix the function which ensures ID uniqueness so that it also takes remote node names into account.
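
For reference, the checks listed above can be expressed roughly as the standalone sketch below. This is illustrative only and not the pcs implementation: the function and variable names are hypothetical, while the "server" resource option and the "remote-node"/"remote-addr" meta attributes are the ones used in the commands above.

# Illustrative sketch only, not the pcs implementation.
import xml.etree.ElementTree as ET

def remote_node_conflicts(cib_xml, new_name, new_addr):
    """Return conflict messages for a proposed remote/guest node name and host."""
    cib = ET.fromstring(cib_xml)
    conflicts = []

    # <remote node name> must not collide with any existing ID in the CIB.
    if cib.find(".//*[@id='{0}']".format(new_name)) is not None:
        conflicts.append("'{0}' is already used as an ID in the CIB".format(new_name))

    for prim in cib.iter("primitive"):
        # Existing pacemaker remote nodes: ocf:pacemaker:remote resources;
        # the host is the "server" option, defaulting to the resource ID.
        if prim.get("provider") == "pacemaker" and prim.get("type") == "remote":
            name = prim.get("id")
            addr = name
            for nv in prim.iter("nvpair"):
                if nv.get("name") == "server":
                    addr = nv.get("value")
            if name == new_name:
                conflicts.append("remote node '{0}' already exists".format(name))
            if addr == new_addr:
                conflicts.append("host '{0}' is already used by '{1}'".format(addr, name))

        # Existing guest nodes: any resource with a "remote-node" meta attribute;
        # the host is "remote-addr", defaulting to the node name.
        for meta_set in prim.findall("meta_attributes"):
            meta = dict(
                (nv.get("name"), nv.get("value"))
                for nv in meta_set.iter("nvpair")
            )
            if "remote-node" in meta:
                name = meta["remote-node"]
                addr = meta.get("remote-addr", name)
                if name == new_name:
                    conflicts.append("guest node '{0}' already exists".format(name))
                if addr == new_addr:
                    conflicts.append("host '{0}' is already used by '{1}'".format(addr, name))

    return conflicts

In the fixed pcs, these situations surface as "Error: '<name>' already exists", as the verification transcript later in this bug shows.
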
Comment 2 Ivan Devat 2017-05-25 04:35 EDT
Created attachment 1282149 [details]
proposed fix (part1)
Comment 3 Ivan Devat 2017-05-25 04:36 EDT
Created attachment 1282150 [details]
proposed fix (part2)
Comment 4 Ivan Devat 2017-05-25 04:37 EDT
Created attachment 1282151 [details]
proposed fix (part3)
Comment 5 Ivan Devat 2017-05-30 06:06:34 EDT
There are additional problems:

> traceback when adding a remote node whose name conflicts with an existing resource ID

[vm-rhel72-1 ~] $ pcs resource create R ocf:heartbeat:Dummy
[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-2 R
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 9, in <module>
    load_entry_point('pcs==0.9.158', 'console_scripts', 'pcs')()
  File "/usr/lib/python2.7/site-packages/pcs/app.py", line 191, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/site-packages/pcs/cluster.py", line 190, in cluster_cmd
    utils.get_modificators()
  File "/usr/lib/python2.7/site-packages/pcs/cli/cluster/command.py", line 50, in node_add_remote
    wait=modifiers["wait"],
  File "/usr/lib/python2.7/site-packages/pcs/cli/common/lib_wrapper.py", line 97, in decorated_run
    return run_with_middleware(run, cli_env, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pcs/cli/common/middleware.py", line 20, in run
    return next_in_line(env, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pcs/cli/common/middleware.py", line 35, in apply
    result_of_next = next_in_line(env, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pcs/cli/common/middleware.py", line 54, in apply
    result_of_next = next_in_line(env, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pcs/cli/common/lib_wrapper.py", line 87, in run
    lib_call_result = run_library_command(lib_env, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pcs/lib/commands/cluster.py", line 192, in node_add_remote
    elif report.info.get["id"] not in already_exists:
TypeError: 'builtin_function_or_method' object has no attribute '__getitem__'
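
The immediate cause of the traceback is the expression in the last frame: report.info.get["id"] subscripts the bound dict.get method instead of calling it. The intended lookup is presumably report.info.get("id") or report.info["id"]; the actual correction is part of the attached patches. A minimal standalone reproduction, with "info" standing in for report.info and assumed (for illustration only) to be a plain dict:

# Minimal standalone reproduction of the TypeError above.
info = {"id": "vm-rhel72-2"}

try:
    info.get["id"]        # subscripts the bound dict.get method -> TypeError
except TypeError as err:
    # On Python 2.7 this prints the same message as the traceback:
    # 'builtin_function_or_method' object has no attribute '__getitem__'
    print(err)

print(info.get("id"))     # the intended call syntax; prints: vm-rhel72-2
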

> cannot create a resource without corosync.conf, even when working on a CIB file with -f

[vm-rhel72-1 ~] $ mv /etc/corosync/corosync.conf /etc/corosync/corosync.conf.backup
[vm-rhel72-1 ~] $ pcs resource create -f cib.xml A ocf:heartbeat:Dummy
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory
Comment 6 Ivan Devat 2017-05-31 08:27 EDT
Created attachment 1283760 [details]
proposed fix (part4)
Comment 7 Ivan Devat 2017-05-31 08:31:15 EDT
After Fix:

[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.158-3.el7.x86_64

> 1) Existing remote node

> create a remote node:

[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-2 remote
Sending remote node configuration files to 'vm-rhel72-2'
vm-rhel72-2: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'vm-rhel72-2'
vm-rhel72-2: successful run of 'pacemaker_remote enable'
vm-rhel72-2: successful run of 'pacemaker_remote start'

> not possible to create a remote node with the same host:

[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-2 remote
Sending remote node configuration files to 'vm-rhel72-2'
vm-rhel72-2: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'vm-rhel72-2'
vm-rhel72-2: successful run of 'pacemaker_remote enable'
vm-rhel72-2: successful run of 'pacemaker_remote start'
[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-2 remote2
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource create remote3 ocf:pacemaker:remote server=vm-rhel72-2 --force
Warning: this command is not sufficient for creating a remote connection, use 'pcs cluster node add-remote'
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

> not possible to create a remote node with the same name:

[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-4 remote
Error: 'remote' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource create remote ocf:pacemaker:remote server=vm-rhel72-4 --force
Warning: this command is not sufficient for creating a remote connection, use 'pcs cluster node add-remote'
Error: 'remote' already exists
[vm-rhel72-1 ~] $ echo $?
1

> not possible to create a guest node with the same host:

[vm-rhel72-1 ~] $ pcs resource create R ocf:heartbeat:Dummy
[vm-rhel72-1 ~] $ pcs cluster node add-guest vm-rhel72-2 R
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource update R meta remote-node=vm-rhel72-2
Error: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1
[vm-rhel72-1 ~] $ pcs resource update R meta remote-node=remote1 remote-addr=vm-rhel72-2
Error: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource meta R remote-node=vm-rhel72-2
Error: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1
[vm-rhel72-1 ~] $ pcs resource meta R remote-node=remote1 remote-addr=vm-rhel72-2
Error: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource create S ocf:heartbeat:Dummy meta remote-node=vm-rhel72-2
Error: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1
[vm-rhel72-1 ~] $ pcs resource create S ocf:heartbeat:Dummy meta remote-node=remote1 remote-addr=vm-rhel72-2
Error: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

> 2) Existing guest node

> create a guest node:

[vm-rhel72-1 ~] $ pcs resource create R ocf:heartbeat:Dummy
[vm-rhel72-1 ~] $ pcs cluster node add-guest GUEST R remote-addr=vm-rhel72-2
Sending remote node configuration files to 'vm-rhel72-2'
vm-rhel72-2: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'vm-rhel72-2'
vm-rhel72-2: successful run of 'pacemaker_remote enable'
vm-rhel72-2: successful run of 'pacemaker_remote start'

> not possible to create a remote node with the same host:

[vm-rhel72-1 ~] $ pcs resource create R2 ocf:heartbeat:Dummy
[vm-rhel72-1 ~] $ pcs cluster node add-guest vm-rhel72-2 R2
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource create R3 ocf:pacemaker:remote server=vm-rhel72-2
Error: this command is not sufficient for creating a remote connection, use 'pcs cluster node add-remote', use --force to override
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

> not possible to create a remote node with the same name:

[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-2 R3
Error: 'vm-rhel72-2' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource create R2 ocf:pacemaker:remote server=vm-rhel72-4 --force
Warning: this command is not sufficient for creating a remote connection, use 'pcs cluster node add-remote'
Error: 'R2' already exists
[vm-rhel72-1 ~] $ echo $?
1

> not possible to create a guest node with the same host:

[vm-rhel72-1 ~] $ pcs resource update R2 meta remote-node=vm-rhel72-1 --force
Warning: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest'
Error: 'vm-rhel72-1' already exists
[vm-rhel72-1 ~] $ echo $?
1
[vm-rhel72-1 ~] $ pcs resource update R2 meta remote-node=N remote-addr=vm-rhel72-1 --force
Warning: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest'
Error: 'vm-rhel72-1' already exists
[vm-rhel72-1 ~] $ echo $?
1

[vm-rhel72-1 ~] $ pcs resource meta R2 remote-node=vm-rhel72-1 --force
Warning: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest'
Error: 'vm-rhel72-1' already exists
[vm-rhel72-1 ~] $ echo $?
1
[vm-rhel72-1 ~] $ pcs resource meta R2 remote-node=N remote-addr=vm-rhel72-1 --force
Warning: this command is not sufficient for creating a guest node, use 'pcs cluster node add-guest'
Error: 'vm-rhel72-1' already exists
[vm-rhel72-1 ~] $ echo $?
1

> not possible to create a conflicting resource:

[vm-rhel72-1 ~] $ pcs resource create GUEST ocf:heartbeat:Dummy
Error: 'GUEST' already exists

> 3) Conflict remote node with existing resource

[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-4 R
Error: 'R' already exists

> 4) Missing corosync.conf

[vm-rhel72-1 ~] $ mv /etc/corosync/corosync.conf /etc/corosync/corosync.conf.backup
[vm-rhel72-1 ~] $ pcs resource create -f cib.xml A ocf:heartbeat:Dummy
[vm-rhel72-1 ~] $ echo $?
0
Comment 13 errata-xmlrpc 2017-08-01 14:24:40 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1958
