Bug 1474747 - Confusing error message in both CLI and GUI after attempt to create cluster with node already in use
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-25 10:23 UTC by Monika Muzikovska
Modified: 2019-08-06 13:10 UTC
CC: 6 users

Fixed In Version: pcs-0.9.167-1.el7
Doc Type: Bug Fix
Doc Text:
.Users no longer advised to destroy clusters when creating new clusters with nodes from existing clusters

Previously, when a user specified nodes from an existing cluster when running the `pcs cluster setup` command or when creating a cluster with the `pcsd` Web UI, pcs reported that as an error and suggested that the user destroy the cluster on those nodes. As a result, users would destroy the cluster on the nodes and thereby break the cluster the nodes were part of, because the remaining nodes would still consider the destroyed nodes members of the cluster. With this fix, users are instead advised to remove the nodes from their cluster, which tells them how to address the issue without breaking their clusters.
Clone Of:
Clones: 1596050
Environment:
Last Closed: 2019-08-06 13:10:01 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
proposed fix (3.30 KB, patch), 2018-11-22 09:01 UTC, Tomas Jelinek


Links
Red Hat Product Errata RHBA-2019:2244, last updated 2019-08-06 13:10:12 UTC

Internal Links: 1722140

Description Monika Muzikovska 2017-07-25 10:23:50 UTC
Description of problem:
An attempt to create a cluster that includes a node from another cluster leads to a confusing error message in the pcs GUI. Following the advice in this message leaves the clusters in a faulty state.

Version-Release number of selected component (if applicable):
0.9.158

How reproducible:
Always.

Steps to Reproduce:
1. Cluster1 consists of nodes A and B; Cluster2 consists of nodes C and D.
2. Open the pcsd web GUI on node A.
3. Create a new cluster that includes node C.

Actual results:
Error message appears:
Unable to create new cluster. If cluster already exists on one or more of the nodes run 'pcs cluster destroy' on all nodes to remove current cluster configuration.
Error: nodes availability check failed, use --force to override.
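Since the summary notes the same confusing message in the CLI, a hypothetical command-line reproduction (same node names as above; prompt and cluster name illustrative) would hit the same availability check:

[A ~] $ pcs cluster setup --name newcluster A B C
Error: nodes availability check failed, use --force to override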

Expected results:
The message should instead recommend running 'pcs cluster node remove' for the nodes that are going to be used in the new cluster.

The recommended --force option is not available in the pcsd GUI, so it should not be mentioned in the GUI message.
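The remediation the reporter suggests can be sketched with the node names from the steps above (prompt hypothetical; 'pcs cluster node remove' is an existing pcs subcommand). Node C is first removed from Cluster2, after which it is free to join a new cluster without destroying Cluster2's configuration:

[D ~] $ pcs cluster node remove C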

Additional info:

Comment 4 Tomas Jelinek 2018-11-22 09:01:30 UTC
Created attachment 1507892 [details]
proposed fix

Comment 6 Ivan Devat 2019-03-21 10:03:24 UTC
After Fix:

[kid76 ~] $ rpm -q pcs
pcs-0.9.167-1.el7.x86_64

Cluster1 consists of nodes kid76 and lion76. Node mule76 is not part of any cluster.

> in cmdline
[mule76 ~] $ pcs cluster setup --name=test kid76 lion76 mule76
Error: kid76: node is already in a cluster
Error: lion76: node is already in a cluster
Error: nodes availability check failed, use --force to override. WARNING: This will destroy existing cluster on the nodes. You should remove the nodes from their clusters instead to keep the clusters working properly.

> in webUI
* Go to web GUI of pcsd on node mule76.
* Create new cluster including nodes kid76 lion76 mule76.
* An error appears with message:
Unable to create new cluster. If one or more of the nodes belong to a cluster already, remove such nodes from their clusters. If you are sure the nodes are not a part of any cluster, run 'pcs cluster destroy' on such nodes to remove current cluster configuration.

kid76:
Error: kid76: node is already in cluster
Error: lion76: node is already in cluster
Error: nodes availability check failed

Comment 11 errata-xmlrpc 2019-08-06 13:10:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2244

