Bug 1464781 - Unable to add remote/guest node forcibly to a cluster if error with pacemaker_remote daemon occurs on remote/guest node
Summary: Unable to add remote/guest node forcibly to a cluster if error with pacemaker_remote daemon occurs on remote/guest node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Ivan Devat
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-25 17:07 UTC by Miroslav Lisik
Modified: 2018-04-10 15:40 UTC
CC List: 6 users

Fixed In Version: pcs-0.9.162-3.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:39:15 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix (17.28 KB, patch) - 2017-06-30 14:36 UTC, Tomas Jelinek
fix for exit code issue (2.31 KB, patch) - 2018-01-04 09:49 UTC, Tomas Jelinek


Links
Red Hat Product Errata RHBA-2018:0866 (last updated 2018-04-10 15:40:17 UTC)

Description Miroslav Lisik 2017-06-25 17:07:13 UTC
Description of problem:

It is not possible to forcibly add a remote/guest node to a cluster when the
pacemaker-remote package is not installed on it, or when an error occurs
while enabling or starting the pacemaker_remote daemon on the remote/guest
node.


Version-Release number of selected component (if applicable):
pcs-0.9.158-6.el7


How reproducible:
always


Steps to Reproduce:

1. Have a cluster authenticated against the remote or guest node.

2. Make sure that the pacemaker-remote package is not installed, or otherwise
cause an error during start/enable of the pacemaker_remote daemon.

3. Issue the command for adding the remote/guest node (a command sketch
follows these steps).
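
A minimal command sketch of these steps (assuming root SSH access to the
remote node and the hostnames used in the transcripts below; the hacluster
credentials are placeholders):

[root@duck-01 ~]# pcs cluster auth virt-136 -u hacluster -p <password>  # step 1
[root@duck-01 ~]# ssh virt-136 'yum -y remove pacemaker-remote'         # step 2
[root@duck-01 ~]# pcs cluster node add-remote virt-136                  # step 3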


Actual results:

Adding a remote node without --force fails as expected:

[root@duck-01 ~]# pcs cluster node add-remote virt-136
Sending remote node configuration files to 'virt-136'
virt-136: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'virt-136'
Error: virt-136: service command failed: pacemaker_remote start: Operation failed., use --force to override
Error: virt-136: service command failed: pacemaker_remote enable: Operation failed., use --force to override
[root@duck-01 ~]# echo $?
1
[root@duck-01 ~]# pcs status nodes | sed -n '/Pacemaker Remote/,$ p'
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

Adding a remote node with --force does not work:

[root@duck-01 ~]# pcs cluster node add-remote virt-136 --force
Sending remote node configuration files to 'virt-136'
virt-136: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'virt-136'
Error: virt-136: service command failed: pacemaker_remote start: Operation failed., use --force to override
Error: virt-136: service command failed: pacemaker_remote enable: Operation failed., use --force to override
[root@duck-01 ~]# echo $?
1
[root@duck-01 ~]# pcs status nodes | sed -n '/Pacemaker Remote/,$ p'
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

Adding a guest node without --force fails as expected:

[root@duck-01 ~]# pcs cluster node add-guest pool-10-34-70-90 guest-01
Sending remote node configuration files to 'pool-10-34-70-90'
pool-10-34-70-90: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'pool-10-34-70-90'
Error: pool-10-34-70-90: service command failed: pacemaker_remote start: Operation failed., use --force to override
Error: pool-10-34-70-90: service command failed: pacemaker_remote enable: Operation failed., use --force to override
[root@duck-01 ~]# echo $?
1
[root@duck-01 ~]# pcs status nodes | sed -n '/Pacemaker Remote/,$ p'
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

Adding a guest node with --force does not work:

[root@duck-01 ~]# pcs cluster node add-guest pool-10-34-70-90 guest-01 --force
Sending remote node configuration files to 'pool-10-34-70-90'
pool-10-34-70-90: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'pool-10-34-70-90'
Error: pool-10-34-70-90: service command failed: pacemaker_remote start: Operation failed., use --force to override
Error: pool-10-34-70-90: service command failed: pacemaker_remote enable: Operation failed., use --force to override
[root@duck-01 ~]# echo $?
1
[root@duck-01 ~]# pcs status nodes | sed -n '/Pacemaker Remote/,$ p'
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:


Expected results:

When a remote/guest node is added with --force, the node is always added to
the cluster despite failed actions on the remote/guest node.
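
For illustration, a successful forced run should look like the "After Fix"
transcript in comment 4 below: the service failures are demoted to warnings
and the command exits 0 (output sketched from that transcript):

[root@duck-01 ~]# pcs cluster node add-remote virt-136 --force
Sending remote node configuration files to 'virt-136'
virt-136: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'virt-136'
Warning: virt-136: service command failed: pacemaker_remote start: Operation failed.
Warning: virt-136: service command failed: pacemaker_remote enable: Operation failed.
[root@duck-01 ~]# echo $?
0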


Additional info:

A possible workaround is to use the --skip-offline option, but that option is
not suggested in the error message.

[root@duck-01 ~]# pcs cluster node add-remote virt-136 --skip-offline
Sending remote node configuration files to 'virt-136'
virt-136: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'virt-136'
Warning: virt-136: service command failed: pacemaker_remote start: Operation failed.
Warning: virt-136: service command failed: pacemaker_remote enable: Operation failed.
[root@duck-01 ~]# echo $?
0
[root@duck-01 ~]# pcs status nodes | sed -n '/Pacemaker Remote/,$ p'
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline: virt-136

[root@duck-01 ~]# pcs cluster node add-guest pool-10-34-70-90 guest-01 --skip-offline
Sending remote node configuration files to 'pool-10-34-70-90'
pool-10-34-70-90: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'pool-10-34-70-90'
Warning: pool-10-34-70-90: service command failed: pacemaker_remote start: Operation failed.
Warning: pool-10-34-70-90: service command failed: pacemaker_remote enable: Operation failed.
[root@duck-01 ~]# echo $?
0
[root@duck-01 ~]# pcs status nodes | sed -n '/Pacemaker Remote/,$ p'
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline: pool-10-34-70-90 virt-136

Comment 2 Tomas Jelinek 2017-06-30 14:36:42 UTC
Created attachment 1293252
proposed fix

Comment 4 Ivan Devat 2017-10-11 08:01:21 UTC
After Fix:

[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.160-1.el7.x86_64

[vm-rhel72-1 ~] $ pcs cluster node add-remote vm-rhel72-2 --force
Sending remote node configuration files to 'vm-rhel72-2'
vm-rhel72-2: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'vm-rhel72-2'
Warning: vm-rhel72-2: service command failed: pacemaker_remote enable: Operation failed.
Warning: vm-rhel72-2: service command failed: pacemaker_remote start: Operation failed.

[vm-rhel72-1 ~] $ echo $?
0

Comment 7 Tomas Jelinek 2018-01-04 09:49:11 UTC
Created attachment 1376747
fix for exit code issue

Comment 8 Ivan Devat 2018-01-08 08:59:19 UTC
After Fix

[ant ~] $ rpm -q pcs
pcs-0.9.162-3.el7.x86_64

[ant ~] $ pcs cluster node add-guest cat REMOTE-NODE
Sending remote node configuration files to 'cat'
cat: successful distribution of the file 'pacemaker_remote authkey'
Requesting start of service pacemaker_remote on 'cat'
Error: cat: service command failed: pacemaker_remote enable: Operation failed.
Error: cat: service command failed: pacemaker_remote start: Operation failed.
Error: Errors have occurred, therefore pcs is unable to continue
[ant ~] $ echo $?
1

Comment 13 errata-xmlrpc 2018-04-10 15:39:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0866

