Bug 1176018 - pcs/pcsd should be able to configure pacemaker remote
Summary: pcs/pcsd should be able to configure pacemaker remote
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Ivan Devat
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks: 1450880
 
Reported: 2014-12-19 07:37 UTC by Fabio Massimo Di Nitto
Modified: 2017-08-01 18:22 UTC
CC List: 9 users

Fixed In Version: pcs-0.9.158-3.el7
Doc Type: Release Note
Doc Text:
New commands for creating and removing remote and guest nodes
Red Hat Enterprise Linux 7.4 provides the following new commands for creating and removing remote and guest nodes:
* pcs cluster node add-guest
* pcs cluster node remove-guest
* pcs cluster node add-remote
* pcs cluster node remove-remote
These commands replace the `pcs cluster remote-node add` and `pcs cluster remote-node remove` commands, which have been deprecated.
Clone Of:
Environment:
Last Closed: 2017-08-01 18:22:57 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix (part1) (388.08 KB, patch)
2017-05-25 08:45 UTC, Ivan Devat
no flags
proposed fix (part2) (130.59 KB, patch)
2017-05-25 08:46 UTC, Ivan Devat
no flags
proposed fix (part3) (266.66 KB, patch)
2017-05-25 08:47 UTC, Ivan Devat
no flags
proposed fix - backup and restore keys (6.00 KB, patch)
2017-05-25 08:58 UTC, Tomas Jelinek
no flags
additional fixes (6.42 KB, patch)
2017-05-25 16:27 UTC, Tomas Jelinek
no flags
proposed fix (part6) (12.37 KB, patch)
2017-05-31 12:19 UTC, Ivan Devat
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1210833 0 unspecified CLOSED pcs is not fully distributed (contains local-only set/query operations) 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1254984 0 low CLOSED Support resource name as an identifier in 'remote-node remove' 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1300413 0 unspecified CLOSED [packaging] please split pcs to separate RPMs for CLI and GUI 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1386512 0 high CLOSED clarify remote nodes terminology 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1459503 0 urgent CLOSED OpenStack is not compatible with pcs management of remote and guest nodes 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2017:1958 0 normal SHIPPED_LIVE pcs bug fix and enhancement update 2017-08-01 18:09:47 UTC


Description Fabio Massimo Di Nitto 2014-12-19 07:37:19 UTC
There are several different issues in configuring and managing pacemaker_remoted. Feel free to break this up into multiple bugs.

First issue, managing remote nodes:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Remote/index.html#idm254594286624

section 5.2 specifically.

Assuming pcsd is authenticated with the remote node, it should be possible to generate /etc/pacemaker/authkey and distribute it to the nodes.
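
For context, the manual procedure this would automate looks roughly like the following (a sketch only; the key size is an arbitrary choice and the hostname "remote1" is made up, while the path, owner and mode match what pacemaker_remoted expects, as seen later in this bug):

  mkdir -p /etc/pacemaker
  dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
  chown hacluster:haclient /etc/pacemaker/authkey
  chmod 400 /etc/pacemaker/authkey
  scp -p /etc/pacemaker/authkey remote1:/etc/pacemaker/authkey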

Second issue, adding/removing a remote node:

(same doc) section 5.7 and pcs cluster remote-node "add|remove".

These bits are misleading at best. Any operation under cluster should be specific to the cluster/nodes.

I would expect pcs cluster remote-node add ... to replace:
 pcs resource create remote1 ocf:pacemaker:remote
or at least alias it.

Similarly, when removing: if I delete the resource with pcs resource delete remote1, the remote node does not vanish from crm_mon output (this could be a pcmk bug).

IMHO pcs cluster remote-node * should manage the nodes and not resources.

pcs constraint remote-node .... is instead a better place for associating resources with remote nodes.
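
For reference, today that association is expressed as a meta-attribute on the resource itself rather than via a dedicated node command (see comment 5); a rough sketch, with made-up resource and node names:

  pcs resource meta vm-guest1 remote-node=guest1      # turns the VM resource into a guest node
  pcs resource create remote1 ocf:pacemaker:remote    # a plain remote node, as quoted above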

Third issue, pcs cluster destroy ... does not wipe /etc/pacemaker/authkey

Comment 5 Ken Gaillot 2016-02-25 22:22:27 UTC
Considerations for the future:

* I believe the documentation currently does not recommend that users install pcs (and therefore pcsd) on remote nodes, because not all pcs commands will work when run from a remote node command line (even "pcs status" fails because it uses "crm_node -l"). So, if we want pcs to take action on the remote node itself, we'll have to update the documentation accordingly.

* pcs currently depends on pacemaker, but remote nodes should not be required to install pacemaker. Not sure of the best way around that if we want pcsd on remote nodes. Perhaps pacemaker and pacemaker-remote could both provide a virtual package (e.g. pacemaker-daemon) that pcs could depend on, but that might break existing workflows that assume pcs will drag in pacemaker.

* There is some confusion in that remote nodes come in two flavors, those created by ocf:pacemaker:remote resource, and those created by the remote-node meta-attribute of another resource (such as VirtualDomain). This is why Fabio found the pcs cluster remote-node command misleading. Upstream documentation now tries to consistently refer to the first kind as "remote nodes" and the second kind as "guest nodes". I wonder if, instead of a separate remote-node command, we could use options to pcs cluster node, e.g. "pcs cluster node add --remote mynode" or "pcs cluster node add --guest=myvm mynode".

* Expanding on the previous point, if we ran pcsd on remote nodes, --start/--enable could be meaningful with the previous suggestion. The command could additionally copy the authentication key etc.

* Per the upstream documentation at http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Remote/index.html#idm140473070951888 , there are a variety of options that can be configured for remote nodes. One complication is that some options (e.g. a non-default port number) must be configured both in the CIB and in /etc/sysconfig/pacemaker on the remote node (and of course must match). Another complication is that the CIB options are different for remote nodes and guest nodes. (A configuration sketch for the port case follows at the end of this comment.)

* Regarding a remote node staying in status after its resource is removed from the configuration, that is expected behavior. The history of the node itself must be removed with "crm_node --force --remove $NODE_NAME". I believe that's true even of cluster nodes.
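
To make the non-default port complication above concrete, a rough sketch of the two places that must agree; the node name and port number are made up, and the option names are the ones used in the upstream Pacemaker Remote documentation:

  # on the remote node, in /etc/sysconfig/pacemaker:
  PCMK_remote_port=3124

  # on a cluster node, the matching CIB side for a remote node:
  pcs resource create remote1 ocf:pacemaker:remote server=remote1 port=3124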

Comment 7 Jan Pokorný [poki] 2016-10-24 12:37:57 UTC
> * pcs currently depends on pacemaker, but remote nodes should not be
>   required to install pacemaker. Not sure of the best way around that
>   if we want pcsd on remote nodes. Perhaps pacemaker and
>   pacemaker-remote could both provide a virtual package (e.g.
>   pacemaker-daemon) that pcs could depend on, but that might break
>   existing workflows that assume pcs will drag in pacemaker.

Making such an assumption is IMHO broken from the beginning, just as
combining both the client (pcs) and the server part (pcsd) into a single
package.  At least I still hope that pcs will be able to operate purely
remotely one day, the same way ccs can operate in the old stack.

There's [bug 1210833] for that.  That would require that pcs (CLI part)
is a self-contained client, without any cluster stack related Requires.
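
For reference, the coupling being discussed can be inspected directly with standard rpm queries (a quick check, not part of any fix; output varies by build):

  rpm -q --requires pcs | grep -i pacemaker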

Comment 8 Jan Pokorný [poki] 2016-10-24 12:47:06 UTC
That said, there is [bug 1300413] asking for splitting
the monolithic package, which supports the previous sketch of
the "ideal state".

Comment 9 Jan Pokorný [poki] 2016-10-24 12:51:26 UTC
Additionally, "pcs cluster status" at the RHEL 6 remote node will show:

> Error: Unable to read /etc/cluster/cluster.conf:
>        No such file or directory

which is irrelevant in this scenario.

Comment 10 Jan Pokorný [poki] 2016-10-25 08:37:24 UTC
re [comment 7]:

There's another issue I've just observed in RHEL 7.3:

- install pacemaker/pcs
- remove pacemaker; it removes pcs along with it
- install pacemaker-remote, and wonder why pcs was removed in the
  previous step, when at least the pcs CLI part would come in handy
  beyond the lifetime of the pacemaker installation (see the command
  sketch below)
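
The sequence above with package-manager commands (a sketch; yum is assumed, and the trailing comments describe the behaviour reported here):

  yum install pacemaker pcs
  yum remove pacemaker          # pcs is removed as well, because it requires pacemaker
  yum install pacemaker-remote  # pcs stays gone, even though its CLI part would still be useful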

Comment 11 Jan Pokorný [poki] 2016-10-25 09:04:34 UTC
+ if you subsequently want to install pcs, it will bring pacemaker back
  even though it is not needed at all

This sub-issue should be rectified as of [bug 1388398].

Comment 16 Ivan Devat 2017-05-25 08:45:29 UTC
Created attachment 1282152 [details]
proposed fix (part1)

Comment 17 Ivan Devat 2017-05-25 08:46:17 UTC
Created attachment 1282153 [details]
proposed fix (part2)

Comment 18 Ivan Devat 2017-05-25 08:47:06 UTC
Created attachment 1282154 [details]
proposed fix (part3)

Comment 19 Tomas Jelinek 2017-05-25 08:58:04 UTC
Created attachment 1282163 [details]
proposed fix - backup and restore keys

Comment 20 Tomas Jelinek 2017-05-25 13:59:37 UTC
Pcs is unable to destroy a stopped cluster. It tries to load the CIB in order to destroy remote and guest nodes as well. When the cluster is stopped, pcs crashes with an exception:

ERROR CIB_LOAD_ERROR Signon to CIB failed: Transport endpoint is not connected Init failed, could not perform requested operations
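
A minimal reproduction sketch (assuming a running cluster; the same stop-then-destroy sequence is used for the verification in comment 24):

  pcs cluster stop --all
  pcs cluster destroy --all    # before the fix: crashes with the error above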

Comment 21 Tomas Jelinek 2017-05-25 14:02:49 UTC
Pcs crashes on cluster setup when --force is used:

# pcs cluster setup --name test rh73-node1 rh73-node2 --force
Destroying cluster on nodes: rh73-node1, rh73-node2...
rh73-node2: Stopping Cluster (pacemaker)...
rh73-node1: Stopping Cluster (pacemaker)...
rh73-node2: Successfully destroyed cluster
rh73-node1: Successfully destroyed cluster

Traceback (most recent call last):
  File "/usr/sbin/pcs", line 9, in <module>
    load_entry_point('pcs==0.9.158', 'console_scripts', 'pcs')()
  File "/usr/lib/python2.7/site-packages/pcs/app.py", line 191, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/site-packages/pcs/cluster.py", line 85, in cluster_cmd
    cluster_setup([utils.pcs_options["--name"]] + argv)
  File "/usr/lib/python2.7/site-packages/pcs/cluster.py", line 462, in cluster_setup
    lib_env.node_communicator(),
UnboundLocalError: local variable 'lib_env' referenced before assignment

Comment 22 Tomas Jelinek 2017-05-25 16:05:01 UTC
It is not possible to add a node to a stopped cluster:
# pcs cluster node add rh73-node3
Disabling SBD service...
rh73-node3: sbd disabled
Sending booth configuration to cluster nodes...
rh73-node3: Booth config(s) (booth.conf, booth.key) saved.
Error: unable to get cib

Comment 23 Tomas Jelinek 2017-05-25 16:27:25 UTC
Created attachment 1282323 [details]
additional fixes

Comment 24 Tomas Jelinek 2017-05-26 12:31:10 UTC
After fix:

[root@rh73-node1:~]# rpm -q pcs
pcs-0.9.158-2.el7.x86_64

> the authkey is distributed to a remote node:

[root@rh73-node3:~]# ls -l /etc/pacemaker/authkey
ls: cannot access /etc/pacemaker/authkey: No such file or directory

[root@rh73-node1:~]# pcs cluster node add-remote rh73-node3
Sending remote node configuration files to 'rh73-node3'
rh73-node3: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'rh73-node3'
rh73-node3: successful run of 'pacemaker_remote enable'
rh73-node3: successful run of 'pacemaker_remote start'

[root@rh73-node3:~]# ls -l /etc/pacemaker/authkey
-r--------. 1 hacluster haclient 64 May 26 14:18 /etc/pacemaker/authkey

> a remote node removal - the authkey is deleted, node vanishes from status:

[root@rh73-node1:~]# pcs status
Cluster name: rhel73
Stack: corosync
Current DC: rh73-node2 (version 1.1.16-9.el7-94ff4df) - partition with quorum
Last updated: Fri May 26 14:21:55 2017
Last change: Fri May 26 14:18:43 2017 by root via cibadmin on rh73-node1

3 nodes configured
5 resources configured

Online: [ rh73-node1 rh73-node2 ]
RemoteOnline: [ rh73-node3 ]

Full list of resources:

 xvmNode1       (stonith:fence_xvm):    Started rh73-node2
 xvmNode2       (stonith:fence_xvm):    Started rh73-node1
 xvmNode3       (stonith:fence_xvm):    Started rh73-node2
 dummy  (ocf::pacemaker:Dummy): Started rh73-node3
 rh73-node3     (ocf::pacemaker:remote):        Started rh73-node1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@rh73-node1:~]# pcs cluster node remove-remote rh73-node3
Attempting to stop: rh73-node3...Stopped
Requesting stop of service pacemaker_remote on 'rh73-node3'
rh73-node3: successful run of 'pacemaker_remote disable'
rh73-node3: successful run of 'pacemaker_remote stop'
Requesting remove remote node files from 'rh73-node3'
rh73-node3: successful removal of the file 'pacemaker_remote authkey'
[root@rh73-node1:~]# pcs status
Cluster name: rhel73
Stack: corosync
Current DC: rh73-node2 (version 1.1.16-9.el7-94ff4df) - partition with quorum
Last updated: Fri May 26 14:22:11 2017
Last change: Fri May 26 14:22:07 2017 by root via cibadmin on rh73-node1

2 nodes configured
4 resources configured

Online: [ rh73-node1 rh73-node2 ]

Full list of resources:

 xvmNode1       (stonith:fence_xvm):    Started rh73-node2
 xvmNode2       (stonith:fence_xvm):    Started rh73-node1
 xvmNode3       (stonith:fence_xvm):    Started rh73-node2
 dummy  (ocf::pacemaker:Dummy): Started rh73-node1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@rh73-node3:~]# ls -l /etc/pacemaker/authkey
ls: cannot access /etc/pacemaker/authkey: No such file or directory

> the authkey is distributed to a guest node:

[root@rh73-node3:~]# ls -l /etc/pacemaker/authkey
ls: cannot access /etc/pacemaker/authkey: No such file or directory
[root@rh73-node1:~]# pcs cluster node add-guest rh73-node3 dummy-guest
Sending remote node configuration files to 'rh73-node3'
rh73-node3: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'rh73-node3'
rh73-node3: successful run of 'pacemaker_remote enable'
rh73-node3: successful run of 'pacemaker_remote start'
[root@rh73-node3:~]# ls -l /etc/pacemaker/authkey
-r--------. 1 hacluster haclient 64 May 26 14:23 /etc/pacemaker/authkey

> a guest node removal - the authkey is deleted, node vanishes from status:

[root@rh73-node1:~]# pcs status
Cluster name: rhel73
Stack: corosync
Current DC: rh73-node2 (version 1.1.16-9.el7-94ff4df) - partition with quorum
Last updated: Fri May 26 14:24:04 2017
Last change: Fri May 26 14:23:37 2017 by root via cibadmin on rh73-node1

3 nodes configured
6 resources configured

Online: [ rh73-node1 rh73-node2 ]
GuestOnline: [ rh73-node3@rh73-node1 ]

Full list of resources:

 xvmNode1       (stonith:fence_xvm):    Started rh73-node2
 xvmNode2       (stonith:fence_xvm):    Started rh73-node2
 xvmNode3       (stonith:fence_xvm):    Started rh73-node2
 dummy  (ocf::pacemaker:Dummy): Started rh73-node3
 dummy-guest    (ocf::pacemaker:Dummy): Started rh73-node1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@rh73-node1:~]# pcs cluster node remove-guest rh73-node3
Requesting stop of service pacemaker_remote on 'rh73-node3'
rh73-node3: successful run of 'pacemaker_remote disable'
rh73-node3: successful run of 'pacemaker_remote stop'
Requesting remove remote node files from 'rh73-node3'
rh73-node3: successful removal of the file 'pacemaker_remote authkey'
[root@rh73-node1:~]# pcs status
Cluster name: rhel73
Stack: corosync
Current DC: rh73-node2 (version 1.1.16-9.el7-94ff4df) - partition with quorum
Last updated: Fri May 26 14:24:38 2017
Last change: Fri May 26 14:24:15 2017 by root via cibadmin on rh73-node1

2 nodes configured
5 resources configured

Online: [ rh73-node1 rh73-node2 ]

Full list of resources:

 xvmNode1       (stonith:fence_xvm):    Started rh73-node2
 xvmNode2       (stonith:fence_xvm):    Started rh73-node1
 xvmNode3       (stonith:fence_xvm):    Started rh73-node1
 dummy  (ocf::pacemaker:Dummy): Started rh73-node2
 dummy-guest    (ocf::pacemaker:Dummy): Started rh73-node1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@rh73-node3:~]# ls -l /etc/pacemaker/authkey
ls: cannot access /etc/pacemaker/authkey: No such file or directory

> cluster destroy deletes the authkey:

[root@rh73-node1:~]# ll /etc/pacemaker/authkey 
-r--------. 1 hacluster haclient 64 May 25 18:08 /etc/pacemaker/authkey
[root@rh73-node1:~]# pcs cluster destroy --all
rh73-node1: Stopping Cluster (pacemaker)...
rh73-node2: Stopping Cluster (pacemaker)...
rh73-node1: Successfully destroyed cluster
rh73-node2: Successfully destroyed cluster
[root@rh73-node1:~]# ll /etc/pacemaker/authkey 
ls: cannot access /etc/pacemaker/authkey: No such file or directory

> it is possible to destroy a stopped cluster:

[root@rh73-node1:~]# pcs cluster start --all --wait
rh73-node2: Starting Cluster...
rh73-node1: Starting Cluster...
Waiting for node(s) to start...
rh73-node1: Started
rh73-node2: Started
[root@rh73-node1:~]# pcs cluster stop --all
rh73-node1: Stopping Cluster (pacemaker)...
rh73-node2: Stopping Cluster (pacemaker)...
rh73-node2: Stopping Cluster (corosync)...
rh73-node1: Stopping Cluster (corosync)...
[root@rh73-node1:~]# pcs cluster destroy --all
Warning: Unable to load CIB to get guest and remote nodes from it, those nodes will not be deconfigured.
rh73-node2: Stopping Cluster (pacemaker)...
rh73-node1: Stopping Cluster (pacemaker)...
rh73-node2: Successfully destroyed cluster
rh73-node1: Successfully destroyed cluster
[root@rh73-node1:~]# echo $?
0

> pcs cluster setup with --force works:

[root@rh73-node1:~]# pcs cluster setup --name rhel73 rh73-node1 rh73-node2 --force
Destroying cluster on nodes: rh73-node1, rh73-node2...
rh73-node2: Stopping Cluster (pacemaker)...
rh73-node1: Stopping Cluster (pacemaker)...
rh73-node2: Successfully destroyed cluster
rh73-node1: Successfully destroyed cluster

Sending 'corosync authkey', 'pacemaker_remote authkey' to 'rh73-node1', 'rh73-node2'
rh73-node1: successful distribution of the file 'corosync authkey'
rh73-node1: successful distribution of the file 'pacemaker_remote authkey'
rh73-node2: successful distribution of the file 'corosync authkey'
rh73-node2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
rh73-node1: Succeeded
rh73-node2: Succeeded

Synchronizing pcsd certificates on nodes rh73-node1, rh73-node2...
rh73-node1: Success
rh73-node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
rh73-node1: Success
rh73-node2: Success
[root@rh73-node1:~]# echo $?
0

> adding a node to a stopped cluster works:

[root@rh73-node1:~]# pcs status
Error: cluster is not currently running on this node
[root@rh73-node1:~]# pcs cluster node add rh73-node3
Disabling SBD service...
rh73-node3: sbd disabled
Sending booth configuration to cluster nodes...
rh73-node3: Booth config(s) (booth.conf, booth.key) saved.
Sending 'corosync authkey' to 'rh73-node3'
rh73-node3: successful distribution of the file 'corosync authkey'
Sending remote node configuration files to 'rh73-node3'
rh73-node3: successful distribution of the file 'pacemaker_remote authkey'
rh73-node1: Corosync updated
rh73-node2: Corosync updated
Setting up corosync...
rh73-node3: Succeeded
Synchronizing pcsd certificates on nodes rh73-node3...
rh73-node3: Success
Restarting pcsd on the nodes in order to reload the certificates...
rh73-node3: Success
[root@rh73-node1:~]# echo $?
0

Comment 25 Tomas Jelinek 2017-05-26 12:45:15 UTC
> the authkey is part of the backup / restore procedure

[root@rh73-node1:~]# cat /etc/corosync/authkey
c92c34c422808663f15bfc811faf12f84984bc270fc846f31601431d1c47ae3f9658b4a9ac55194503a039f145f6293e90f414d7ff78e54956f3a140c12fe00c8e5441d9ab73d0aeaf88f5d822098f93c8f591f94e27c16aa8626efa5017461d39e4f9cfddf16648636a465110813760044e4de3ee33f33dfdbfa464ce02cc22[root@rh73-node1:~]# 
[root@rh73-node1:~]# cat /etc/pacemaker/authkey
eca4d1834c7619b535a19432e90c3107fa3e9a82d06a513ff33ae32d701d00d4[root@rh73-node1:~]# 

[root@rh73-node1:~]# pcs config backup cluster.tar.bz2
[root@rh73-node1:~]# tar -tf cluster.tar.bz2 | grep authkey
corosync_authkey
pacemaker_authkey

[root@rh73-node1:~]# pcs cluster destroy --all
rh73-node1: Stopping Cluster (pacemaker)...
rh73-node2: Stopping Cluster (pacemaker)...
rh73-node2: Successfully destroyed cluster
rh73-node1: Successfully destroyed cluster
[root@rh73-node1:~]# cat /etc/corosync/authkey 
cat: /etc/corosync/authkey: No such file or directory
[root@rh73-node1:~]# cat /etc/pacemaker/authkey 
cat: /etc/pacemaker/authkey: No such file or directory

[root@rh73-node1:~]# pcs config restore cluster.tar.bz2
rh73-node1: Succeeded
rh73-node2: Succeeded
[root@rh73-node1:~]# cat /etc/corosync/authkey
c92c34c422808663f15bfc811faf12f84984bc270fc846f31601431d1c47ae3f9658b4a9ac55194503a039f145f6293e90f414d7ff78e54956f3a140c12fe00c8e5441d9ab73d0aeaf88f5d822098f93c8f591f94e27c16aa8626efa5017461d39e4f9cfddf16648636a465110813760044e4de3ee33f33dfdbfa464ce02cc22[root@rh73-node1:~]# 
[root@rh73-node1:~]# cat /etc/pacemaker/authkey
eca4d1834c7619b535a19432e90c3107fa3e9a82d06a513ff33ae32d701d00d4[root@rh73-node1:~]#

Comment 27 Ivan Devat 2017-05-30 08:46:18 UTC
There are additional problems:
> flag --skip-offline is ignored

[vm-rhel72-1 ~] $ pcs cluster node add-guest no-host D
Error: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known), use --skip-offline to override
[vm-rhel72-1 ~] $ pcs cluster node add-guest no-host D --skip-offline
Error: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known), use --skip-offline to override

> guest is removed from cib even if the command does not succeed.

[vm-rhel72-1 ~] $ pcs cluster node remove-guest no-host
Requesting stop of service pacemaker_remote on 'no-host'
Error: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known), use --skip-offline to override
[vm-rhel72-1 ~] $ pcs cluster node remove-guest no-host
Error: guest node 'no-host' does not appear to exist in configuration

Comment 28 Ivan Devat 2017-05-31 12:19:56 UTC
Created attachment 1283759 [details]
proposed fix (part6)

Comment 29 Ivan Devat 2017-05-31 12:22:18 UTC
After Fix:

[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.158-3.el7.x86_64

> flag --skip-offline

[vm-rhel72-1 ~] $ pcs cluster node add-guest no-host D
Error: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known), use --skip-offline to override

[vm-rhel72-1 ~] $ pcs cluster node add-guest no-host D --skip-offline
Warning: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known)
Sending remote node configuration files to 'no-host'
Warning: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known)
Requesting start of service pacemaker_remote on 'no-host'
Warning: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known)

> guest is removed from cib even if the command does not succeed.

[vm-rhel72-1 ~] $ pcs cluster node remove-guest no-host
Requesting stop of service pacemaker_remote on 'no-host'
Error: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known), use --skip-offline to override

[vm-rhel72-1 ~] $ pcs cluster node remove-guest no-host --skip-offline
Requesting stop of service pacemaker_remote on 'no-host'
Warning: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known)
Requesting remove remote node files from 'no-host'
Warning: Unable to connect to no-host (Could not resolve host: no-host; Name or service not known)

Comment 37 errata-xmlrpc 2017-08-01 18:22:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1958

