Bug 1619818 - RFE: have pcs ask if pcsd is actually running when it gets a "Failed connect to" error
Summary: RFE: have pcs ask if pcsd is actually running when it gets a "Failed connect to" error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pcs
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: 8.4
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-21 19:56 UTC by Corey Marthaler
Modified: 2021-05-18 15:12 UTC
CC List: 10 users

Fixed In Version: pcs-0.10.8-1.el8
Doc Type: No Doc Update
Doc Text:
This is just a change to an error message printed by pcs; no doc update is needed.
Clone Of:
Environment:
Last Closed: 2021-05-18 15:12:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
proposed fix (3.58 KB, patch)
2021-01-11 13:44 UTC, Tomas Jelinek
no flags

Description Corey Marthaler 2018-08-21 19:56:25 UTC
Description of problem:
At first I thought the network was down; then I realized pcsd just wasn't running.

[root@mckinley-01 ~]# systemctl stop pcsd
[root@mckinley-01 ~]# pcs cluster enable --all
Error: unable to enable all nodes
Unable to connect to mckinley-01, try setting higher timeout in --request-timeout option (Failed connect to mckinley-01:2224; Connection refused)
Unable to connect to mckinley-02, try setting higher timeout in --request-timeout option (Failed connect to mckinley-02:2224; Connection refused)
Unable to connect to mckinley-03, try setting higher timeout in --request-timeout option (Failed connect to mckinley-03:2224; Connection refused)

[root@mckinley-01 ~]# systemctl start pcsd
[root@mckinley-01 ~]# pcs cluster enable --all
mckinley-01: Cluster Enabled
mckinley-02: Cluster Enabled
mckinley-03: Cluster Enabled
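
For reference, a quick way to tell a stopped pcsd apart from a real network problem is to check the service and its port (2224) directly on the node. A minimal sketch, assuming a systemd-managed pcsd as in the transcript above:

# is the pcsd service active on this node?
systemctl is-active pcsd
# is anything listening on the pcsd port 2224?
ss -tln | grep 2224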


Version-Release number of selected component (if applicable):
[root@mckinley-01 ~]# rpm -qi pcs
Name        : pcs
Version     : 0.9.165
Release     : 1.el7
Architecture: x86_64
Install Date: Thu 19 Jul 2018 02:14:30 PM CDT
Group       : System Environment/Base
Size        : 16481620
License     : GPLv2
Signature   : RSA/SHA256, Fri 22 Jun 2018 09:01:36 AM CDT, Key ID 199e2f91fd431d51
Source RPM  : pcs-0.9.165-1.el7.src.rpm
Build Date  : Fri 22 Jun 2018 06:50:37 AM CDT

Comment 11 Tomas Jelinek 2021-01-11 13:44:02 UTC
Created attachment 1746272
proposed fix

Test:

[root@rh83-node1:~]# systemctl stop pcsd
[root@rh83-node1:~]# pcs cluster enable --all
rh83-node2: Cluster Enabled
rh83-node3: Cluster Enabled
Error: unable to enable all nodes
Unable to connect to rh83-node1, check if pcsd is running there or try setting higher timeout with --request-timeout option (Failed to connect to rh83-node1 port 2224: Connection refused)
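
The reworded message keeps the --request-timeout hint for the case where pcsd is running but slow to respond. A minimal sketch of raising the timeout, assuming the value is given in seconds; 120 is only an example:

pcs cluster enable --all --request-timeout=120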

Comment 12 Miroslav Lisik 2021-02-01 16:40:29 UTC
Test:

[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.8-1.el8.x86_64

[root@r8-node-01 ~]# systemctl stop pcsd
[root@r8-node-01 ~]# pcs cluster enable --all
r8-node-02: Cluster Enabled
Error: unable to enable all nodes
Unable to connect to r8-node-01, check if pcsd is running there or try setting higher timeout with --request-timeout option (Failed to connect to r8-node-01 port 2224: Connection refused)

Comment 16 Michal Mazourek 2021-02-04 12:37:14 UTC
BEFORE:
=======

[root@virt-031 ~]# rpm -q pcs
pcs-0.10.7-3.el8.x86_64


## Enabling cluster with stopped pcsd

[root@virt-031 ~]# pcs host auth virt-03{1,2} -u hacluster -p password
virt-032: Authorized
virt-031: Authorized
[root@virt-031 ~]# pcs cluster setup test virt-03{1,2}
{...}

[root@virt-031 ~]# systemctl stop pcsd

[root@virt-031 ~]# pcs cluster enable --all
virt-032: Cluster Enabled
Error: unable to enable all nodes
Unable to connect to virt-031, try setting higher timeout in --request-timeout option (Failed to connect to virt-031 port 2224: Connection refused)

> Enable shows error for the first node without a hint about stopped pcsd


## Starting cluster with stopped pcsd

[root@virt-031 ~]# pcs cluster start --all
virt-031: Unable to connect to virt-031, try setting higher timeout in --request-timeout option (Failed to connect to virt-031 port 2224: Connection refused)
virt-032: Starting Cluster...
Error: unable to start all nodes
virt-031: Unable to connect to virt-031, try setting higher timeout in --request-timeout option (Failed to connect to virt-031 port 2224: Connection refused)

> Start shows error for the first node without a hint about stopped pcsd


## Starting and enabling cluster with running pcsd

[root@virt-031 ~]# systemctl start pcsd
[root@virt-031 ~]# pcs cluster start --all
virt-032: Starting Cluster...
virt-031: Starting Cluster...
[root@virt-031 ~]# pcs cluster enable --all
virt-031: Cluster Enabled
virt-032: Cluster Enabled

> OK: No problem with running pcsd


AFTER:
======

[root@virt-058 ~]# rpm -q pcs
pcs-0.10.8-1.el8.x86_64


## Enabling cluster with stopped pcsd

[root@virt-058 ~]# pcs host auth virt-05{8,9} -u hacluster -p password
virt-059: Authorized
virt-058: Authorized
[root@virt-058 ~]# pcs cluster setup test virt-05{8,9}
{...}

[root@virt-058 ~]# systemctl stop pcsd

[root@virt-058 ~]# pcs cluster enable --all
virt-059: Cluster Enabled
Error: unable to enable all nodes
Unable to connect to virt-058, check if pcsd is running there or try setting higher timeout with --request-timeout option (Failed to connect to virt-058 port 2224: Connection refused)

> OK: Enable shows improved error message for the first node with a hint about stopped pcsd - 'check if pcsd is running there'


## Starting cluster with stopped pcsd

[root@virt-058 ~]# pcs cluster start --all
virt-058: Unable to connect to virt-058, check if pcsd is running there or try setting higher timeout with --request-timeout option (Failed to connect to virt-058 port 2224: Connection refused)
virt-059: Starting Cluster...
Error: unable to start all nodes
virt-058: Unable to connect to virt-058, check if pcsd is running there or try setting higher timeout with --request-timeout option (Failed to connect to virt-058 port 2224: Connection refused)

> OK: Start shows improved error message for the first node with a hint about stopped pcsd - 'check if pcsd is running there'


## Starting and enabling cluster with running pcsd

[root@virt-058 ~]# systemctl start pcsd
[root@virt-058 ~]# pcs cluster start --all
virt-059: Starting Cluster...
virt-058: Starting Cluster...
[root@virt-058 ~]# pcs cluster enable --all
virt-058: Cluster Enabled
virt-059: Cluster Enabled

> OK: No problem with running pcsd


Marking as VERIFIED for pcs-0.10.8-1.el8

Comment 19 errata-xmlrpc 2021-05-18 15:12:05 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pcs bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1737

