Bug 1374175 - "crm_node -n" needs to return the right name on remote nodes
Summary: "crm_node -n" needs to return the right name on remote nodes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.3
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: rc
Target Release: 7.6
Assignee: Ken Gaillot
QA Contact: Patrik Hagara
URL:
Whiteboard:
Duplicates: 1327302 (view as bug list)
Depends On:
Blocks: 1290512 1477664
 
Reported: 2016-09-08 07:46 UTC by Tomas Jelinek
Modified: 2021-06-10 11:31 UTC
CC List: 15 users

Fixed In Version: pacemaker-1.1.19-2.el7
Doc Type: Bug Fix
Doc Text:
Cause: Pacemaker had no way to report the node name of a Pacemaker Remote node to a tool executed on that node's command line. Consequence: If a Pacemaker Remote's node name were different from its local hostname, tools like crm_node would incorrectly report the hostname as the node name, when run from that node's command line. Fix: A new cluster daemon request provides the local node name to any requesting tool. Result: crm_node, and tools that use it such as crm_standby and crm_failcount, now correctly report the local node name, even when run from the command line of a Pacemaker Remote node whose node name is different from its local hostname.
Clone Of: 1290512
Environment:
Last Closed: 2018-10-30 07:57:39 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1327302 0 unspecified CLOSED Some pcs commands fail when run on Pacemaker Remote nodes 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1388398 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Knowledge Base (Solution) 3255831 0 None None None 2018-07-26 19:07:17 UTC
Red Hat Product Errata RHBA-2018:3055 0 None None None 2018-10-30 07:59:13 UTC

Internal Links: 1327302 1388398

Description Tomas Jelinek 2016-09-08 07:46:48 UTC
+++ This bug was initially created as a clone of Bug #1290512 +++

Description of problem: pcs doesn't support putting Pacemaker Remote nodes into standby mode


Version-Release number of selected component (if applicable): 0.9.143-15.el7


How reproducible: Consistently/easily


Steps to Reproduce:
1. Create a Pacemaker cluster.
2. Configure a Pacemaker Remote node in the cluster.
3. Try to put the Pacemaker Remote node into standby mode using "pcs cluster standby <nodename>".

Actual results: Error message: "Error: node '<nodename>' does not appear to exist in configuration"

Expected results: Node is put into standby mode


Additional info: "crm_standby --node <nodename> -v on" works, and can be used as a workaround


--- snip ---


--- Additional comment from Radek Steiger on 2016-09-02 11:30:54 EDT ---

BUG summary:

When using the standby/unstandby/maintenance/unmaintenance command locally on a remote node without specifying the target node (i.e. it defaults to the local node), the command either fails with an error or does nothing:


[root@virt-131 ~]# pcs cluster standby
Error: Could not map name=virt-131.cluster-qe.lab.eng.brq.redhat.com to a UUID
[root@virt-131 ~]# pcs status nodes | grep virt-131
 Online: virt-131

[root@virt-131 ~]# pcs cluster unstandby
[root@virt-131 ~]# pcs status nodes | grep virt-131
 Standby: virt-131

[root@virt-131 ~]# pcs node maintenance
Error: Unable to put current node to maintenance mode: Could not map name=virt-131.cluster-qe.lab.eng.brq.redhat.com to a UUID

[root@virt-131 ~]# pcs node unmaintenance
Error: Unable to remove current node from maintenance mode: Could not map name=virt-131.cluster-qe.lab.eng.brq.redhat.com to a UUID

--- Additional comment from Radek Steiger on 2016-09-02 11:33:27 EDT ---

Note: This might be due to an attempt to guess the node ID from the machine's FQDN while the cluster uses a different identifier (simple or alternate hostname, etc...).

--- Additional comment from Tomas Jelinek on 2016-09-07 10:22:58 EDT ---

It took me a while to figure this out because it has been working just fine for me. This is what is going on:

Let's have three nodes with FQDNs rh72-node1, rh72-node2 and rh72-node3. The first and second are full-fledged nodes and the third one is used as a remote node.

If the remote node has been created like this
# pcs resource create rh72-node3 ocf:pacemaker:remote
everything works.

However it does not work if the node has been created like this
# pcs resource create remote-node3 ocf:pacemaker:remote server=rh72-node3

This is what happens:
[root@rh72-node3:~]# pcs node standby --debug
Running: /usr/sbin/crm_standby -v on
Finished running: /usr/sbin/crm_standby -v on
Return value: 1
--Debug Output Start--
Could not map name=rh72-node3 to a UUID
--Debug Output End--
Error: Could not map name=rh72-node3 to a UUID

This works perfectly fine on a full-fledged node but obviously not on remotes. OK, so we need to fill in the node name in pcs to make it work. Let's test it manually beforehand to see if it works:
[root@rh72-node3:~]# crm_node -n
rh72-node3
[root@rh72-node3:~]# crm_standby -v on -N rh72-node3
Could not map name=rh72-node3 to a UUID
[root@rh72-node3:~]# echo $?
1
[root@rh72-node3:~]# crm_mon -1bD
Online: [ rh72-node1 rh72-node2 ]
RemoteOnline: [ remote-node3 ]

Active resources:
<snipped>

This does not seem to be the correct way to get the remote node name. Maybe we can get the node id and then the name for that id:
[root@rh72-node3:~]# crm_node -i
[root@rh72-node3:~]# echo $?
1
No, that does not work either.

If I put the right node name, it works:
[root@rh72-node3:~]# crm_standby -v on -N remote-node3
[root@rh72-node3:~]# echo $?
0
[root@rh72-node3:~]# crm_mon -1bD
RemoteNode remote-node3: standby
Online: [ rh72-node1 rh72-node2 ]

Active resources:
<snipped>

But how to get the right name? The only other option I can think of is to look for an ocf:pacemaker:remote resource with server=<output of crm_node -n>. That feels a little clumsy to me, because there are other ways to create remote nodes (e.g. via the remote-node meta attribute) and we would need to check them all. I would much prefer getting the remote node name directly from pacemaker. Best of all would be if the name could be omitted entirely, as it can be with full-fledged nodes.
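
For concreteness, a rough sketch of that lookup (hypothetical commands, not pcs code; it assumes the remote node was defined via a remote resource with a server attribute, and that CIB queries from the remote node are proxied over its active connection):

hostname=$(crm_node -n)    # on a remote node this currently just echoes the local hostname
cibadmin --query --xpath \
  "//primitive[@provider='pacemaker'][@type='remote']/instance_attributes/nvpair[@name='server'][@value='$hostname']/../.."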


I could really use some help from the pacemaker team here.

--- Additional comment from Ken Gaillot on 2016-09-07 17:59:21 EDT ---

Ah, I didn't think about the node name vs uname issue.

We've had this sort of thing come up before, and I think it will require some changes on pacemaker's side. "crm_node -n" needs to return the right name on remote nodes, and crm_attribute (which crm_standby is just a wrapper for) needs to determine the local node name properly. So I suppose we need to clone this bz for pacemaker.

By the way, full cluster nodes can have a node name different from their uname, too (via "name:" in corosync.conf). I'm guessing crm_standby and crm_node detect the correct name in that case, so pcs doesn't have the same problem there.
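
For reference, crm_standby is essentially a thin wrapper that ends up running something like the following (a sketch; exact options may differ between versions), which is why resolving the local node name correctly is the crux:

node=$(crm_node -n)    # this is the step that goes wrong on a remote node today
crm_attribute --node "$node" --name standby --update on --lifetime forever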

Comment 1 Tomas Jelinek 2016-09-08 08:30:39 UTC
Pcs currently uses "crm_node --cluster-id" and "crm_node --name-for-id" to get the local node name instead of "crm_node -n". This is because "crm_node -n" returns the node's hostname when the cluster is not running on the node.

Here is the function:
https://github.com/ClusterLabs/pcs/blob/2439c263cad6952c12f3a4fe73db6656a7094a1b/pcs/lib/pacemaker.py#L210
The function is intended to get pacemaker's name of the local node, so it raises an exception if pacemaker is not running. That is completely OK for our use case - we either get the name or we know pacemaker is not running.

So we would like to have "crm_node --cluster-id" and "crm_node --name-for-id" working on remote nodes as well.
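
A rough shell equivalent of that helper (a sketch, not the actual pcs code):

node_id=$(crm_node --cluster-id) || { echo "pacemaker is not running" >&2; exit 1; }
crm_node --name-for-id="$node_id"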

Comment 2 Ken Gaillot 2016-09-09 16:33:56 UTC
This will have to be addressed in the 7.4 timeframe

Comment 3 Ken Gaillot 2017-01-10 22:08:10 UTC
This will not be ready in the 7.4 timeframe

Comment 4 Ken Gaillot 2017-02-01 18:39:12 UTC
*** Bug 1417936 has been marked as a duplicate of this bug. ***

Comment 5 Klaus Wenninger 2017-02-08 08:21:22 UTC
When thinking about solutions for this issue, I wanted to point out
that sbd is another component that could benefit from a way to
query the node name on remote nodes.
With the shared block device(s) disabled (as supported up to 7.3),
the situation is simpler: only the pacemaker watcher needs this
information, to check the CIB for the state of the remote node,
so it is fine if the name becomes available only after the remote
node has been connected by a cluster node.
With shared block device(s) enabled, the node name is also needed
to occupy the correct slot on the device(s), so it would of course
be nice to have the information right at the start of sbd.
Dreaming is allowed ;-)
There are of course ways to work around this, such as giving the
node name in the sbd config file (already available and the way to
do it at the moment), occupying a second slot once pacemaker-remote
is connected, or using pcmk_host_map to fence remote nodes.
The latter two would use a mechanism to query the host name via
pacemaker-remote for the pacemaker watcher - something that could
be used by crm_node as well - and would thus avoid the need for a
node name to be configured in the sbd config.

I just wanted to note that there are issues with sbd on remote
nodes in general, which is why we don't officially support it
there. The thought above might serve as a piece of the puzzle
that makes it smooth enough to be supported.

Comment 6 Jan Pokorný [poki] 2017-02-08 09:26:10 UTC
I'd like to raise another point: once pcs finally does not depend
on the "pacemaker" package (which unnecessarily pulls in corosync
on a remote node where you only want pcs installed, see also
[bug 1388398]), there is no crm_node utility at all.  It would then
be wise to move crm_node over to the -cli package.  That being
said, pcs already expects crm_node to be present, see the reopened
[bug 1327302], and it's questionable whether pcs can make do
without it on remote nodes.  Tomáš can comment more on this topic.

Comment 7 Tomas Jelinek 2017-02-08 09:52:00 UTC
Pcs indeed relies on crm_node to be present on remote nodes. Pcs uses it to figure out the local node name. That is needed for example in commands "pcs node standby" and "pcs node maintenance" when no node is specified. This was discussed in bz1290512 from which this bz was cloned and is summarized here in comment 0.

Comment 8 Tomas Jelinek 2017-02-08 10:03:26 UTC
*** Bug 1327302 has been marked as a duplicate of this bug. ***

Comment 9 Klaus Wenninger 2017-02-08 10:32:31 UTC
Just for completeness:
Do all occurrences of crm_node within pcs require pacemaker to be
running (e.g. an lrmd instance, pacemaker-remote or anything)!?

Comment 10 Ken Gaillot 2017-02-08 17:41:02 UTC
(In reply to Jan Pokorný from comment #6)
> I'd like to raise another point: once pcs finally does not depend
> on the "pacemaker" package (which unnecessarily pulls in corosync
> on a remote node where you only want pcs installed, see also
> [bug 1388398]), there is no crm_node utility at all.  It would then
> be wise to move crm_node over to the -cli package.

To clarify, this is already a goal that depends on this bz. crm_node is not in the -cli package precisely because it requires the -cluster-libs package, and we do not want -cli to depend on that. The same is true of crm_attribute. If that dependency can be removed, those tools will be moved to -cli.
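
One way to see that constraint on a RHEL 7 host might be the following (a sketch; the package split and library names are assumptions based on the packaging at the time):

rpm -qf /usr/sbin/crm_node                          # owned by the main pacemaker package, not pacemaker-cli
ldd /usr/sbin/crm_node | grep -iE 'crmcluster|cpg'  # linked against libcrmcluster/corosync libraries from -cluster-libs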

Comment 12 Ken Gaillot 2017-10-18 22:31:21 UTC
This will not make it in time for 7.5

Comment 13 Ken Gaillot 2018-06-18 22:55:02 UTC
Fixed upstream as of pull request https://github.com/ClusterLabs/pacemaker/pull/1515

To summarize the final implementation:

crm_node -n/--name, -N/--name-for-id, and -i/--cluster-id now work on full cluster nodes and Pacemaker Remote nodes, whether or not their name in the cluster matches their local hostname, and whether or not they are called from a resource agent or manually. (Note that --name-for-id is intended to be useful only for full cluster nodes, as remote nodes do not have a corosync id.)

The crm_node commands will now return an error if the cluster is not running.

Not relevant to RHEL, but for completeness: the upstream fix for the 1.1 series fixes -i/--cluster-id for the corosync 2+ stack only (-n and -N are fixed for all stacks). The upstream fix for the 2.0 series additionally fixes -q/--quorum and -R/--remove.

Also for completeness' sake: the crm_standby and crm_failcount tools both default to "crm_node -n" if no node is explicitly specified, so they are also fixed by this.
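
A minimal check of that new behaviour on a disposable test node might look like this (illustrative only; stopping pacemaker_remote on a node that is in use will disrupt the cluster):

systemctl stop pacemaker_remote    # or pacemaker/corosync on a full cluster node
crm_node -n; echo "rc=$?"          # with the fix, this reports an error and a non-zero exit code instead of the bare hostname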

Comment 15 Ken Gaillot 2018-07-06 02:19:08 UTC
The latest build fixes one regression in the original:

With the original fix, if a resource agent called "crm_node -n" (directly, or indirectly via the ocf_local_nodename function) from its meta-data action, the meta-data action would time out when called by the cluster, because the node name was not passed for meta-data actions, causing a deadlock between the agent and the cluster.

With the latest build, the node name is passed to meta-data actions, so they succeed as usual when called by the cluster.
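
For reference, the pattern that triggered the regression looks roughly like this hypothetical agent fragment (names are illustrative, not taken from a real agent):

meta_data() {
    local me
    me=$(crm_node -n)    # or ocf_local_nodename from ocf-shellfuncs; with the original fix this blocked during meta-data
    cat <<EOF
<?xml version="1.0"?>
<resource-agent name="example">
  <!-- agent metadata that references the local node name: $me -->
</resource-agent>
EOF
}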

Comment 16 Patrik Hagara 2018-08-21 12:21:05 UTC
before:
=======

> [root@virt-148 ~]# rpm -q pacemaker
> pacemaker-1.1.18-12.el7.x86_64
> [root@virt-148 ~]# ssh virt-149 rpm -q pacemaker-remote
> pacemaker-remote-1.1.18-12.el7.x86_64
> [root@virt-148 ~]# pcs status
> Cluster name: bzzt
> Stack: corosync
> Current DC: virt-148.cluster-qe.lab.eng.brq.redhat.com (version 1.1.18-12.el7-2b07d5c5a9) - partition with quorum
> Last updated: Tue Aug 21 13:57:37 2018
> Last change: Tue Aug 21 13:52:10 2018 by root via cibadmin on virt-148.cluster-qe.lab.eng.brq.redhat.com
> 
> 1 node configured
> 0 resources configured
> 
> Online: [ virt-148.cluster-qe.lab.eng.brq.redhat.com ]
> 
> No resources
> 
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled
> [root@virt-148 ~]# pcs cluster node add-remote virt-149.cluster-qe.lab.eng.brq.redhat.com my-remote-node
> Sending remote node configuration files to 'virt-149.cluster-qe.lab.eng.brq.redhat.com'
> virt-149.cluster-qe.lab.eng.brq.redhat.com: successful distribution of the file 'pacemaker_remote authkey'
> Requesting start of service pacemaker_remote on 'virt-149.cluster-qe.lab.eng.brq.redhat.com'
> virt-149.cluster-qe.lab.eng.brq.redhat.com: successful run of 'pacemaker_remote enable'
> virt-149.cluster-qe.lab.eng.brq.redhat.com: successful run of 'pacemaker_remote start'
> [root@virt-148 ~]# pcs status
>   ...
> 2 nodes configured
> 1 resource configured
> 
> Online: [ virt-148.cluster-qe.lab.eng.brq.redhat.com ]
> RemoteOnline: [ my-remote-node ]
> 
> Full list of resources:
> 
>  my-remote-node	(ocf::pacemaker:remote):	Started virt-148.cluster-qe.lab.eng.brq.redhat.com
>   ...
> [root@virt-148 ~]# ssh virt-149 crm_node -n
> virt-149.cluster-qe.lab.eng.brq.redhat.com
> [root@virt-148 ~]# echo $?
> 0
> [root@virt-148 ~]# ssh virt-149 crm_node -i
> [root@virt-148 ~]# echo $?
> 1
> [root@virt-148 ~]# ssh virt-149 pcs cluster standby
> Error: unable to get local node name from pacemaker: node id not found
> [root@virt-148 ~]# echo $?
> 1
> [root@virt-148 ~]# pcs status
>   ...
> Online: [ virt-148.cluster-qe.lab.eng.brq.redhat.com ]
> RemoteOnline: [ my-remote-node ]
>   ...
> [root@virt-148 ~]# ssh virt-149 pcs node maintenance
> Error: unable to get local node name from pacemaker: node id not found
> [root@virt-148 ~]# echo $?
> 1
> [root@virt-148 ~]# pcs status
>   ...
> Online: [ virt-148.cluster-qe.lab.eng.brq.redhat.com ]
> RemoteOnline: [ my-remote-node ]
>   ...
> [root@virt-148 ~]# ssh virt-149 pcs cluster standby my-remote-node
> [root@virt-148 ~]# echo $?
> 0
> [root@virt-148 ~]# pcs status
>   ...
> RemoteNode my-remote-node: standby
> Online: [ virt-148.cluster-qe.lab.eng.brq.redhat.com ]
>   ...

Before the fix, a remote node configured with a node name different from its hostname was unable to determine its correct cluster node name and ID. Consequently, such a remote node could not be put into standby or maintenance mode, or taken out of either mode, by running the appropriate command on the remote node itself unless the correct node name was passed as an argument. Passing the correct node name explicitly to the (un)standby/(un)maintenance commands worked around the issue.


after:
======

> [root@virt-136 ~]# rpm -q pacemaker
> pacemaker-1.1.19-7.el7.x86_64
> [root@virt-136 ~]# ssh virt-138 rpm -q pacemaker-remote
> pacemaker-remote-1.1.19-7.el7.x86_64
> [root@virt-136 ~]# pcs status
>   ...
> 1 node configured
> 0 resources configured
> 
> Online: [ virt-136.cluster-qe.lab.eng.brq.redhat.com ]
> 
> No resources
>   ...
> [root@virt-136 ~]# pcs cluster node add-remote virt-138.cluster-qe.lab.eng.brq.redhat.com my-remote-node
> Sending remote node configuration files to 'virt-138.cluster-qe.lab.eng.brq.redhat.com'
> virt-138.cluster-qe.lab.eng.brq.redhat.com: successful distribution of the file 'pacemaker_remote authkey'
> Requesting start of service pacemaker_remote on 'virt-138.cluster-qe.lab.eng.brq.redhat.com'
> virt-138.cluster-qe.lab.eng.brq.redhat.com: successful run of 'pacemaker_remote enable'
> virt-138.cluster-qe.lab.eng.brq.redhat.com: successful run of 'pacemaker_remote start'
> [root@virt-136 ~]# pcs status
>   ...
> Online: [ virt-136.cluster-qe.lab.eng.brq.redhat.com ]
> RemoteOnline: [ my-remote-node ]
> 
> Full list of resources:
> 
>  my-remote-node	(ocf::pacemaker:remote):	Started virt-136.cluster-qe.lab.eng.brq.redhat.com
>   ...
> [root@virt-136 ~]# ssh virt-138 crm_node -n
> my-remote-node
> [root@virt-136 ~]# ssh virt-138 crm_node -i
> my-remote-node
> [root@virt-136 ~]# ssh virt-138 pcs cluster standby
> [root@virt-136 ~]# pcs status
>   ...
> RemoteNode my-remote-node: standby
> Online: [ virt-136.cluster-qe.lab.eng.brq.redhat.com ]
>   ...
> [root@virt-136 ~]# ssh virt-138 pcs cluster unstandby
> [root@virt-136 ~]# pcs status
>   ...
> Online: [ virt-136.cluster-qe.lab.eng.brq.redhat.com ]
> RemoteOnline: [ my-remote-node ]
>   ...
> [root@virt-136 ~]# ssh virt-138 pcs node maintenance
> [root@virt-136 ~]# pcs status
>   ...
> RemoteNode my-remote-node: maintenance
> Online: [ virt-136.cluster-qe.lab.eng.brq.redhat.com ]
>   ...
> [root@virt-136 ~]# ssh virt-138 pcs node unmaintenance
> [root@virt-136 ~]# pcs status
>   ...
> Online: [ virt-136.cluster-qe.lab.eng.brq.redhat.com ]
> RemoteOnline: [ my-remote-node ]
>   ...

After the fix, remote nodes are able to correctly determine their cluster node name.

Marking verified in pacemaker-1.1.19-7.el7.

Comment 18 errata-xmlrpc 2018-10-30 07:57:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3055

