Bug 1448569 - Tell user there's nothing configured if they try to start a nonexistent cluster
Summary: Tell user there's nothing configured if they try to start a nonexistent cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Ondrej Mular
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-05 19:13 UTC by Corey Marthaler
Modified: 2020-09-29 20:10 UTC (History)
CC List: 6 users

Fixed In Version: pcs-0.9.169-1.el7
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1743731 (view as bug list)
Environment:
Last Closed: 2020-09-29 20:10:26 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2020:3964 (last updated 2020-09-29 20:10:42 UTC)

Description Corey Marthaler 2017-05-05 19:13:14 UTC
Description of problem:

[root@mckinley-01 ~]# pcs status
Error: cluster is not currently running on this node


# Proposed behavior
[root@mckinley-01 ~]# pcs cluster start
Error: cluster is not currently configured on this node


# Current behavior
[root@mckinley-01 ~]# pcs cluster start
Starting Cluster...
Job for corosync.service failed because the control process exited with error code. See "systemctl status corosync.service" and "journalctl -xe" for details.
Error: unable to start corosync

[root@mckinley-01 ~]# systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2017-05-05 13:59:58 CDT; 30s ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
  Process: 6968 ExecStart=/usr/share/corosync/corosync start (code=exited, status=1/FAILURE)

May 05 13:59:58 mckinley-01.lab.msp.redhat.com systemd[1]: Starting Corosync Cluster Engine...
May 05 13:59:58 mckinley-01.lab.msp.redhat.com corosync[6968]: Starting Corosync Cluster Engine (corosync): [FAILED]
May 05 13:59:58 mckinley-01.lab.msp.redhat.com systemd[1]: corosync.service: control process exited, code=exited status=1
May 05 13:59:58 mckinley-01.lab.msp.redhat.com systemd[1]: Failed to start Corosync Cluster Engine.
May 05 13:59:58 mckinley-01.lab.msp.redhat.com systemd[1]: Unit corosync.service entered failed state.
May 05 13:59:58 mckinley-01.lab.msp.redhat.com systemd[1]: corosync.service failed.



Version-Release number of selected component (if applicable):
pcs-0.9.157-1.el7.x86_64

Comment 2 Ondrej Mular 2019-11-29 08:48:42 UTC
Upstream fix:
https://github.com/ClusterLabs/pcs/commit/f318267d96cbfc3d834f93405e56bf2b403db4ff

Test:
[root@rhel7-node1 pcs]# pcs cluster start
Error: cluster is not currently configured on this node
[root@rhel7-node1 pcs]# echo $?
1
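The upstream change amounts to checking whether the node has a corosync.conf before attempting to start services, instead of letting corosync.service fail later. A minimal shell sketch of that logic (hypothetical helper; the real fix is in the Python pcs sources linked above):

```shell
# Hypothetical sketch of the fix's logic: refuse to start when the node
# has no corosync.conf, and print a clear error instead of a systemd failure.
cluster_start() {
    local conf="${1:-/etc/corosync/corosync.conf}"
    if [ ! -f "$conf" ]; then
        echo "Error: cluster is not currently configured on this node" >&2
        return 1
    fi
    echo "Starting Cluster..."
    # ... actual start of corosync/pacemaker would follow here ...
}
```

On an unconfigured node this prints the error and returns 1, matching the verified behavior in the comments below.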

Comment 4 Ivan Devat 2020-04-09 14:35:58 UTC
After Fix

[kid76 ~] $ pcs cluster start
Error: cluster is not currently configured on this node
[kid76 ~] $ echo $?
1
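Because the fixed pcs exits with status 1 and a clear message, callers can branch on the exit status directly rather than scraping systemctl or journal output. A hedged wrapper sketch (assumes pcs is on PATH; the wrapper name is illustrative):

```shell
# Hypothetical wrapper: rely on the exit status of 'pcs cluster start'
# instead of parsing its output or the systemd journal.
start_cluster_or_report() {
    if pcs cluster start; then
        echo "cluster started"
    else
        echo "cluster not configured or corosync failed to start" >&2
        return 1
    fi
}
```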

Comment 8 Michal Mazourek 2020-04-14 14:23:53 UTC
BEFORE:
=======

[root@virt-202 ~]# rpm -q pcs
pcs-0.9.168-4.el7.x86_64


[root@virt-202 ~]# pcs cluster corosync
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory

[root@virt-202 ~]# pcs cluster start
Starting Cluster (corosync)...
Job for corosync.service failed because the control process exited with error code. See "systemctl status corosync.service" and "journalctl -xe" for details.
Error: unable to start corosync

[root@virt-202 ~]# echo $?
1

[root@virt-202 ~]# systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2020-04-14 15:36:15 CEST; 35s ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
  Process: 27799 ExecStart=/usr/share/corosync/corosync start (code=exited, status=1/FAILURE)

Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Starting Corosync Cluster Engine...
Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com corosync[27806]: Can't read file /etc/corosync/corosync.conf reason = (No such file or directory)
Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com corosync[27799]: Starting Corosync Cluster Engine (corosync): [FAILED]
Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com systemd[1]: corosync.service: control process exited, code=exited status=1
Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Failed to start Corosync Cluster Engine.
Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Unit corosync.service entered failed state.
Apr 14 15:36:15 virt-202.cluster-qe.lab.eng.brq.redhat.com systemd[1]: corosync.service failed.


AFTER:
======

[root@virt-012 ~]# rpm -q pcs
pcs-0.9.169-1.el7.x86_64


[root@virt-012 ~]# pcs cluster corosync
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory

[root@virt-012 ~]# pcs cluster start
Error: cluster is not currently configured on this node

> OK

[root@virt-012 ~]# echo $?
1

[root@virt-012 ~]# systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview

Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [QB    ] withdrawing server sockets
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [SERV  ] Service engine unloaded: corosync configuration map access
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [QB    ] withdrawing server sockets
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [SERV  ] Service engine unloaded: corosync configuration service
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [QB    ] withdrawing server sockets
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [QB    ] withdrawing server sockets
Apr 14 15:48:24 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[31960]:  [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
Apr 14 15:48:25 virt-012.cluster-qe.lab.eng.brq.redhat.com corosync[1068]: Waiting for corosync services to unload:.[  OK  ]
Apr 14 15:48:25 virt-012.cluster-qe.lab.eng.brq.redhat.com systemd[1]: Stopped Corosync Cluster Engine.

> log is unrelated to the 'pcs cluster start' action; corosync didn't fail


[root@virt-012 ~]# pcs cluster start --wait
Error: cluster is not currently configured on this node

[root@virt-012 ~]# pcs cluster start --wait=10
Error: cluster is not currently configured on this node

[root@virt-012 ~]# pcs cluster start --request-timeout=100
Error: cluster is not currently configured on this node

> OK


Marking as VERIFIED in pcs-0.9.169-1.el7

Comment 10 errata-xmlrpc 2020-09-29 20:10:26 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pcs bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3964

