Bug 2082914 - sos collect fails to get node list from a pacemaker cluster
Summary: sos collect fails to get node list from a pacemaker cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: sos
Version: 9.1
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Pavel Moravec
QA Contact: Miroslav Hradílek
URL:
Whiteboard:
Duplicates: 2097673 (view as bug list)
Depends On:
Blocks:
 
Reported: 2022-05-08 13:53 UTC by Pavel Moravec
Modified: 2022-11-15 12:54 UTC (History)
CC List: 7 users

Fixed In Version: sos-4.3-2.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 11:12:29 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Github sosreport sos pull 2891 0 None Merged [pacemaker] Update collect cluster profile for pacemaker 2022-05-08 13:53:24 UTC
Red Hat Issue Tracker RHELPLAN-121337 0 None None None 2022-05-08 13:54:33 UTC
Red Hat Product Errata RHEA-2022:8275 0 None None None 2022-11-15 11:12:42 UTC

Description Pavel Moravec 2022-05-08 13:53:24 UTC
This bug was initially created as a copy of Bug #2065805

I am copying this bug because: 

we need parity between 8.7 and 9.1


Description of problem:
Looks like there is a bug in getting the list of nodes from `pcs status`. It appears we return None when parsing the `pcs status` output, so sos collect fails to get the list of nodes.
 
I am able to get pcs status:
#  pcs status
Cluster name: rhel8cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: rhel8-1.examplerh.com (version 2.1.0-8.el8-7c3f660707) - partition with quorum
  * Last updated: Fri Mar 18 14:06:51 2022
  * Last change:  Fri Mar 18 13:59:29 2022 by root via cibadmin on rhel8-1.examplerh.com
  * 3 nodes configured
  * 3 resource instances configured (2 DISABLED)
 
Node List:
  * Online: [ rhel8-1.examplerh.com rhel8-2.examplerh.com rhel8-3.examplerh.com ]
 
Full List of Resources:
  * d-01	(ocf::pacemaker:Dummy):	 Stopped (disabled)
  * d-02	(ocf::pacemaker:Dummy):	 Stopped (disabled)
  * virtfence_xvm	(stonith:fence_xvm):	 Started rhel8-1.examplerh.com
 
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
 
-----------
 
sos collect fails to gather sosreports because it cannot get the list of nodes.
# sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999  --batch
[sos_collector:__init__] Executing /usr/sbin/sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999 --batch
[sos_collector:__init__] Found cluster profiles: dict_keys(['jbon', 'kubernetes', 'ocp', 'ovirt', 'rhhi_virt', 'rhv', 'pacemaker', 'satellite'])
 
sos-collector (version 4.1)
 
This utility is used to collect sosreports from multiple nodes simultaneously.
It uses OpenSSH's ControlPersist feature to connect to nodes and run commands
remotely. If your system installation of OpenSSH is older than 5.6, please
upgrade.
 
An archive of sosreport tarballs collected from the nodes will be generated in
/var/tmp/sos.kntlhfn6 and may be provided to an appropriate support
representative.
 
The generated archive may contain data considered sensitive and its content
should be reviewed by the originating organization before being passed to any
third party.
 
No configuration changes will be made to the system running this utility or
remote systems that it connects to.
 
[sos_collector:configure_sos_cmd] Initial sos cmd set to sosreport --batch --case-id=999999 --all-logs -c auto
[sos_collector:prep] password not specified, assuming SSH keys
sos-collector ASSUMES that SSH keys are installed on all nodes unless the
--password option is provided.
 
[localhost:determine_host_policy] using local policy Red Hat Enterprise Linux
[localhost:run_command] Running command hostname
[rhel8-1.examplerh.com:get_hostname] Hostname set to rhel8-1.examplerh.com
[rhel8-1.examplerh.com:_load_sos_info] sos version is 4.1
[rhel8-1.examplerh.com:run_command] Running command sosreport -l
[rhel8-1.examplerh.com:run_command] Running command sosreport --list-presets
[rhel8-1.examplerh.com:run_command] Running command oc whoami
[sos_collector:determine_cluster] Installation matches pacemaker, checking for layered profiles
Cluster type set to Pacemaker High Availability Cluster Manager
[rhel8-1.examplerh.com:run_command] Running command pcs status
Cluster failed to enumerate nodes: 'NoneType' object is not iterable
[pacemaker] Failed to get node list: 'NoneType' object is not iterable
[sos_collector:get_nodes_from_cluster] Node list: []
[sos_collector:reduce_node_list] Node list reduced to []
 
The following is a list of nodes to collect from:
	rhel8-1.examplerh.com
 
[archive:TarFileArchive] initialised empty FileCacheArchive at '/var/tmp/sos.kntlhfn6/sos-collector-999999-2022-03-18-talcu'
 
Connecting to nodes...
Collection would only gather from localhost due to failure to either enumerate or connect to cluster nodes. Assuming single collection from localhost is not desired.
Aborting...
[sos_collector:close_all_connections] Closing SSH connection to localhost

Version-Release number of selected component (if applicable):
sos-4.1-9.el8_5.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Make sure pacemaker is started and pcs status shows nodes online.
2. sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999  --batch


Actual results:
sos collect fails to get the node list from the `pcs status` output.

Expected results:
It should collect sosreports from all nodes in the cluster.

Additional info:
If I pass `--nodes` explicitly, then it works:
# sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999  --batch --nodes=rhel8-1.examplerh.com,rhel8-2.examplerh.com,rhel8-3.examplerh.com
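The failure mode reported here is that parsing the node list out of the `pcs status` output yields None instead of a list, which then surfaces as "'NoneType' object is not iterable". The real logic lives in sos's pacemaker cluster profile (updated in the linked pull request 2891); the sketch below is only an illustration of that kind of parsing, with a hypothetical helper name, showing how returning an empty list instead of None keeps the caller's iteration safe:

```python
import re

def parse_online_nodes(pcs_status_output):
    """Extract node names from the 'Online:' line of `pcs status` output.

    Illustrative only -- not sos's actual implementation. Returns an
    empty list (rather than None) when no node list is found, so that
    callers can iterate over the result safely.
    """
    match = re.search(r"Online:\s*\[\s*(.*?)\s*\]", pcs_status_output)
    if match is None:
        # Returning None here instead would reproduce the
        # "'NoneType' object is not iterable" error from this report.
        return []
    # Node names are whitespace-separated inside the brackets.
    return match.group(1).split()

output = """\
Node List:
  * Online: [ rhel8-1.examplerh.com rhel8-2.examplerh.com rhel8-3.examplerh.com ]
"""
print(parse_online_nodes(output))
```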

Comment 2 Pavel Moravec 2022-06-16 10:16:06 UTC
*** Bug 2097673 has been marked as a duplicate of this bug. ***

Comment 5 Pavel Moravec 2022-06-17 10:12:13 UTC
Thanks for testing the RHEL8 clone. If it is difficult to prepare a pacemaker cluster with RHEL9 nodes, I guess we can do just a sanity-only check; I verified the source code is the same in both sos-4.3-2.el8.noarch and sos-4.3-2.el9.noarch. So unless there is some difference in:
- pacemaker on RHEL8 vs. pacemaker on RHEL9,
- OS behaviour,
- or python3.6 / python3.9 behaviour over the same sos code,
we can expect the fix to work well on RHEL9 as well.

Comment 17 errata-xmlrpc 2022-11-15 11:12:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (sos bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:8275

