
Bug 1163682

Summary: nodes authentication stops if failed on one node
Product: Red Hat Enterprise Linux 7
Reporter: Tomas Jelinek <tojeline>
Component: pcs
Assignee: Ondrej Mular <omular>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: low
Docs Contact:
Priority: low
Version: 7.1
CC: cfeist, cluster-maint, john, omular, rsteiger, tojeline
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: pcs-0.9.140-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: The user tries to authenticate several nodes and one of them is offline. Consequence: The command exits on the first failed authentication. Fix: The 'pcs cluster auth' command now continues with the remaining nodes regardless of failures on individual nodes. Result: Pcs tries to authenticate all nodes regardless of a failure on some of them.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-19 09:33:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Tomas Jelinek 2014-11-13 09:43:26 UTC
Description of problem:
When a node is offline or unreachable during node authentication, pcs does not attempt to authenticate against the remaining nodes.

Version-Release number of selected component (if applicable):
pcs-0.9.135

How reproducible:
always

Steps to Reproduce:
1. disconnect / shutdown a node
2. run pcs cluster auth with the offline node in the middle of a node list

Actual results:
pcs exits when trying to authenticate against the offline node
# pcs cluster auth rh70-node1 rh70-node3 rh70-node2 --force
Username: hacluster
Password: 
rh70-node1: Authorized
Error: unable to connect to pcsd on rh70-node3
Unable to connect to rh70-node3 ([Errno 113] No route to host)

Expected results:
pcs tries to authenticate against all nodes regardless of failure on one node
# pcs cluster auth rh70-node1 rh70-node3 rh70-node2 --force
Username: hacluster
Password: 
rh70-node1: Authorized
Error: unable to connect to pcsd on rh70-node3
Unable to connect to rh70-node3 ([Errno 113] No route to host)
rh70-node2: Authorized
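
The fix amounts to a collect-errors-and-continue pattern: each node is authenticated on its own, connection failures are recorded instead of aborting the whole run, and a non-zero status is returned if any node failed. The sketch below only illustrates that pattern; it is not the pcs source code, and the auth_node helper and node names are hypothetical.

# Illustration of the continue-on-error pattern described above.
# Not the pcs implementation; auth_node() is a hypothetical helper
# that raises OSError when pcsd on the target node is unreachable.
import sys

def auth_all_nodes(nodes, auth_node):
    failed = []
    for node in nodes:
        try:
            auth_node(node)  # attempt this node independently
            print("%s: Authorized" % node)
        except OSError as err:
            # record the failure and keep going with the remaining nodes
            print("Error: unable to connect to pcsd on %s (%s)" % (node, err))
            failed.append(node)
    return 1 if failed else 0

if __name__ == "__main__":
    def fake_auth(node):  # simulates the offline node from this report
        if node == "rh70-node3":
            raise OSError("[Errno 113] No route to host")
    sys.exit(auth_all_nodes(["rh70-node1", "rh70-node3", "rh70-node2"], fake_auth))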

Comment 3 Ondrej Mular 2015-05-04 08:32:28 UTC
Test:

1. Try to authenticate nodes (first of them is offline)
# pcs cluster auth node1 node2 node3 --force
Username: hacluster
Password: 
Error: node1: Unable to connect to pcsd: Unable to connect to node1 ([Errno 113] No route to host)
node2: Authorized
node3: Authorized

Comment 4 Tomas Jelinek 2015-06-04 14:31:49 UTC
Before Fix:
[root@rh71-node1 ~]# rpm -q pcs
pcs-0.9.137-13.el7_1.2.x86_64
[root@rh71-node1:~]# pcs cluster auth rh71-node1 rh71-node2 rh71-node3
Username: hacluster
Password: 
rh71-node1: Authorized
Error: unable to connect to pcsd on rh71-node2
Unable to connect to rh71-node2 ([Errno 111] Connection refused)
[root@rh71-node1:~]# echo $?
1



After Fix:
[root@rh71-node1:~]# rpm -q pcs
pcs-0.9.140-1.el7.x86_64
[root@rh71-node1:~]# pcs cluster auth rh71-node1 rh71-node2 rh71-node3
Username: hacluster
Password: 
rh71-node3: Authorized
Error: Unable to communicate with rh71-node2
rh71-node1: Authorized
[root@rh71-node1:~]# echo $?
1
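
Note that even after the fix the command still exits with status 1 when any node fails to authenticate, so a calling script can detect a partial failure from the exit status. A minimal sketch, assuming pcs is on PATH; the node names and password are placeholders:

# Run "pcs cluster auth" non-interactively and use its exit status to
# detect that at least one node could not be authenticated.
# Node names and the password are placeholders.
import subprocess

cmd = ["pcs", "cluster", "auth", "node1", "node2", "node3",
       "-u", "hacluster", "-p", "PASSWORD"]
result = subprocess.run(cmd)
if result.returncode != 0:
    print("authentication failed for at least one node")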

Comment 6 John Jelinek 2015-07-31 17:55:08 UTC
How do I acquire the fix?

Comment 7 Chris Feist 2015-08-05 21:59:12 UTC
The fixes will be in a future Red Hat Enterprise Linux release, or you can use the patch from comment #1 to build your own custom package.

Comment 10 errata-xmlrpc 2015-11-19 09:33:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2290.html