Bug 1269242 - pcs needs to be able to view status and config on nodes that are not part of any cluster, but have a cib.xml file
Summary: pcs needs to be able to view status and config on nodes that are not part of any cluster, but have a cib.xml file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-06 19:02 UTC by Chris Feist
Modified: 2016-11-03 20:55 UTC
CC List: 6 users

Fixed In Version: pcs-0.9.152-4.el7
Doc Type: Bug Fix
Doc Text:
Cause: The user wants to display the cluster configuration from a provided cib.xml file on a host that is not part of any cluster.
Consequence: pcs fails to display the configuration and exits with an error message.
Fix: When displaying the configuration from a provided cib.xml file, pcs no longer tries to read information from a running cluster.
Result: pcs displays the cluster configuration.
Clone Of:
Environment:
Last Closed: 2016-11-03 20:55:33 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix (2.52 KB, patch)
2016-07-08 11:48 UTC, Tomas Jelinek
no flags


Links
System: Red Hat Product Errata
ID: RHSA-2016:2596
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: pcs security, bug fix, and enhancement update
Last Updated: 2016-11-03 12:11:34 UTC

Description Chris Feist 2015-10-06 19:02:54 UTC
Description of problem:
When running pcs to view a CIB file on a node that is not part of a cluster, it reports errors and does not allow you to view the status or config.  When Support is debugging issues with an sosreport, they usually only have the cib.xml file (and possibly corosync.conf).  They need to view the info in that file in human-readable format through pcs.

Version-Release number of selected component (if applicable):
pcs-0.9.143-9.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Take cib.xml (and optionally corosync.conf) from a working cluster
2. Run these commands on a node that has pcs installed, but does not have a cluster running (or configured):

[root@host-600 ~]# pcs -f out.txt --corosync_conf=/etc/corosync/corosync.conf status
Cluster name: blah
Error: unable to get list of pacemaker nodes

(On a node with no corosync.conf)
[root@host-600 ~]# pcs -f out.txt  config
Cluster Name: 
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory

This is so people can debug sosreport information without having to set up their own cluster, etc.  We may also want to add an alias for --corosync_conf so it is easier to remember/type.

We can provide an option so pcs knows it is in this mode, maybe '--offline'; pcs would then ignore the pacemaker node check and would not die if corosync.conf is missing (it would just print a warning).

We can also make the pacemaker node list smarter by looking at the cib.xml instead of parsing the output of the 'crm_node -l' command (see the sketch below).  It may also make sense to see if we can get the 'crm_node -l' command to use the CIB_file environment variable.
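
For illustration only (this is not pcs code and the function name is invented), reading the node names straight from the <nodes> section of a cib.xml takes only a few lines of Python, which is roughly what "looking at the cib.xml" instead of parsing 'crm_node -l' would amount to:

# Hypothetical sketch: list node names from a CIB file without a running cluster.
import xml.etree.ElementTree as ET

def node_names_from_cib(cib_path):
    """Return the uname of every node defined in the CIB's <nodes> section."""
    root = ET.parse(cib_path).getroot()
    return [node.get("uname")
            for node in root.findall("./configuration/nodes/node")
            if node.get("uname")]

# Example usage: print(node_names_from_cib("cib.xml"))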

Comment 1 Tomas Jelinek 2016-07-07 15:42:56 UTC
See also https://github.com/ClusterLabs/pcs/issues/93

Comment 2 Tomas Jelinek 2016-07-08 11:48:07 UTC
Created attachment 1177634
proposed fix
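
The patch itself is in the attachment above; as a rough sketch of the idea (invented names, not the actual pcs implementation), the change boils down to reading the CIB from the file passed with -f and skipping anything that needs a live cluster or /etc/corosync/corosync.conf:

# Hypothetical sketch of the approach, not the attached patch.
import subprocess
import xml.etree.ElementTree as ET

def load_cib(cib_file=None):
    """Load the CIB from a file (offline) or from the running cluster."""
    if cib_file:
        # Offline: parse the provided cib.xml; no crm_node calls and no
        # /etc/corosync/corosync.conf reads are attempted.
        return ET.parse(cib_file).getroot()
    # Online: query pacemaker for the live CIB, as before.
    return ET.fromstring(subprocess.check_output(["cibadmin", "--query"]))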

Test:

[root@rh72-node1:~]# pcs status
Cluster name: rhel72
Stack: corosync
Current DC: rh72-node2 (version 1.1.15-3.el7-e174ec8) - partition with quorum
Last updated: Fri Jul  8 13:21:16 2016          Last change: Fri Jul  8 13:21:05 2016 by hacluster via crmd on rh72-node2

2 nodes and 3 resources configured

Online: [ rh72-node1 rh72-node2 ]

Full list of resources:

 xvmNode1       (stonith:fence_xvm):    Started rh72-node1
 xvmNode2       (stonith:fence_xvm):    Started rh72-node2
 dummy  (ocf::heartbeat:Dummy): Started rh72-node1

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

[root@rh72-node1:~]# pcs cluster cib > ~/cib.xml

[root@rh72-node1:~]# pcs cluster destroy --all
rh72-node1: Stopping Cluster (pacemaker)...
rh72-node2: Stopping Cluster (pacemaker)...
rh72-node1: Successfully destroyed cluster
rh72-node2: Successfully destroyed cluster

[root@rh72-node1:~]# pcs -f ~/cib.xml status
Stack: corosync
Current DC: rh72-node2 (version 1.1.15-3.el7-e174ec8) - partition with quorum
Last updated: Fri Jul  8 13:23:13 2016          Last change: Fri Jul  8 13:21:05 2016 by hacluster via crmd on rh72-node2

2 nodes and 3 resources configured

Online: [ rh72-node1 rh72-node2 ]

Full list of resources:

 xvmNode1       (stonith:fence_xvm):    Started rh72-node1
 xvmNode2       (stonith:fence_xvm):    Started rh72-node2
 dummy  (ocf::heartbeat:Dummy): Started rh72-node1



These commands for displaying the cluster configuration work on a host with no cluster:
pcs -f cib.xml acl
pcs -f cib.xml acl show
pcs -f cib.xml alert
pcs -f cib.xml alert config
pcs -f cib.xml alert show
pcs -f cib.xml cluster status
pcs -f cib.xml cluster cib
pcs -f cib.xml config
pcs -f cib.xml config show
pcs -f cib.xml constraint
pcs -f cib.xml constraint colocation
pcs -f cib.xml constraint colocation show
pcs -f cib.xml constraint list
pcs -f cib.xml constraint location
pcs -f cib.xml constraint location show
pcs -f cib.xml constraint order
pcs -f cib.xml constraint order show
pcs -f cib.xml constraint ref
pcs -f cib.xml constraint show
pcs -f cib.xml constraint ticket
pcs -f cib.xml constraint ticket show
pcs -f cib.xml node utilization
pcs -f cib.xml property
pcs -f cib.xml property list
pcs -f cib.xml property show
pcs -f cib.xml resource
pcs -f cib.xml resource defaults
pcs -f cib.xml resource op defaults
pcs -f cib.xml resource failcount show
pcs -f cib.xml resource show
pcs -f cib.xml resource utilization
pcs -f cib.xml status
pcs -f cib.xml status cluster
pcs -f cib.xml status group
pcs -f cib.xml status resources
pcs -f cib.xml status status
pcs -f cib.xml status xml
pcs -f cib.xml stonith
pcs -f cib.xml stonith level
pcs -f cib.xml stonith show

pcs --corosync_conf corosync.conf cluster corosync
pcs --corosync_conf corosync.conf quorum config

pcs --corosync_conf corosync.conf -f cib.xml status nodes
pcs --corosync_conf corosync.conf -f cib.xml status nodes both
pcs --corosync_conf corosync.conf -f cib.xml status nodes config
pcs --corosync_conf corosync.conf -f cib.xml status nodes corosync

Comment 3 Ivan Devat 2016-07-15 11:25:07 UTC
Setup:
[vm-rhel72-1 ~] $ pcs status
Cluster name: devcluster
Stack: corosync
Current DC: vm-rhel72-1 (version 1.1.15-2.el7-25920db) - partition with quorum
Last updated: Fri Jul 15 13:17:17 2016          Last change: Fri Jul 15 13:05:32 2016 by hacluster via cibadmin on vm-rhel72-1

2 nodes and 3 resources configured

Online: [ vm-rhel72-1 vm-rhel72-3 ]

Full list of resources:

 AA     (ocf::heartbeat:Dummy): Stopped
 BB     (ocf::heartbeat:Dummy): Stopped
 xvm-fencing    (stonith:fence_xvm):    Started vm-rhel72-3

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[vm-rhel72-1 ~] $ pcs cluster cib > ~/cib.xml
[vm-rhel72-1 ~] $ pcs cluster destroy --all
vm-rhel72-1: Stopping Cluster (pacemaker)...
vm-rhel72-3: Stopping Cluster (pacemaker)...
vm-rhel72-3: Successfully destroyed cluster
vm-rhel72-1: Successfully destroyed cluster


Before fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.152-3.el7.x86_64

[vm-rhel72-1 ~] $ pcs -f ~/cib.xml status
Error: unable to get list of pacemaker nodes

After Fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.152-4.el7.x86_64

[vm-rhel72-1 ~] $ pcs -f ~/cib.xml status
Stack: corosync
Current DC: vm-rhel72-1 (version 1.1.15-2.el7-25920db) - partition with quorum
Last updated: Fri Jul 15 13:20:51 2016          Last change: Fri Jul 15 13:05:32 2016 by hacluster via cibadmin on vm-rhel72-1

2 nodes and 3 resources configured

Online: [ vm-rhel72-1 vm-rhel72-3 ]

Full list of resources:

 AA     (ocf::heartbeat:Dummy): Stopped
 BB     (ocf::heartbeat:Dummy): Stopped
 xvm-fencing    (stonith:fence_xvm):    Started vm-rhel72-3

Comment 8 errata-xmlrpc 2016-11-03 20:55:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html

