Bug 2082914

Summary: sos collect fails to get node list from a pacemaker cluster
Product: Red Hat Enterprise Linux 9
Reporter: Pavel Moravec <pmoravec>
Component: sos
Assignee: Pavel Moravec <pmoravec>
Status: CLOSED ERRATA
QA Contact: Miroslav Hradílek <mhradile>
Severity: high
Priority: high
Version: 9.1
CC: agk, bmr, mhradile, nwahl, plambri, sbradley, theute
Target Milestone: rc
Keywords: OtherQA, Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: All
OS: Linux
Fixed In Version: sos-4.3-2.el9
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2022-11-15 11:12:29 UTC
Type: Bug
Regression: ---

Description Pavel Moravec 2022-05-08 13:53:24 UTC
This bug was initially created as a copy of Bug #2065805

I am copying this bug because:

we need parity between 8.7 and 9.1


Description of problem:
There appears to be a bug in getting the list of nodes from `pcs status`. It seems we return None when parsing the `pcs status` output, so sos collect fails to get the list of nodes.
 
`pcs status` itself works:
#  pcs status
Cluster name: rhel8cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: rhel8-1.examplerh.com (version 2.1.0-8.el8-7c3f660707) - partition with quorum
  * Last updated: Fri Mar 18 14:06:51 2022
  * Last change:  Fri Mar 18 13:59:29 2022 by root via cibadmin on rhel8-1.examplerh.com
  * 3 nodes configured
  * 3 resource instances configured (2 DISABLED)
 
Node List:
  * Online: [ rhel8-1.examplerh.com rhel8-2.examplerh.com rhel8-3.examplerh.com ]
 
Full List of Resources:
  * d-01	(ocf::pacemaker:Dummy):	 Stopped (disabled)
  * d-02	(ocf::pacemaker:Dummy):	 Stopped (disabled)
  * virtfence_xvm	(stonith:fence_xvm):	 Started rhel8-1.examplerh.com
 
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
 
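The "'NoneType' object is not iterable" error in the log below is the classic symptom of a parser that returns None instead of an empty list when its pattern does not match. A minimal sketch of that failure mode (hypothetical illustration, not the actual sos code; the function name and regex are assumptions):

```python
import re

def get_nodes(pcs_status_output):
    """Hypothetical node-list parser, similar in spirit to what the
    pacemaker cluster profile does: extract hostnames from the
    'Online: [ ... ]' line of `pcs status` output."""
    match = re.search(r'Online:\s*\[\s*(.*?)\s*\]', pcs_status_output)
    if match is None:
        # Returning None here is the bug pattern: the caller then runs
        # `for node in get_nodes(...)` and raises
        # "'NoneType' object is not iterable".
        return None
    return match.group(1).split()

output = """Node List:
  * Online: [ rhel8-1.examplerh.com rhel8-2.examplerh.com rhel8-3.examplerh.com ]
"""
print(get_nodes(output))          # three hostnames parsed
print(get_nodes("Online: none"))  # no bracketed list: None, not []
```

If the `pcs status` output format shifts even slightly between pacemaker versions (for example, the `* ` bullet prefix in newer releases), a stricter pattern can stop matching and the None propagates into the iteration.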
-----------
 
sos collect fails to gather sosreports because it cannot get the list of nodes.
# sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999  --batch
[sos_collector:__init__] Executing /usr/sbin/sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999 --batch
[sos_collector:__init__] Found cluster profiles: dict_keys(['jbon', 'kubernetes', 'ocp', 'ovirt', 'rhhi_virt', 'rhv', 'pacemaker', 'satellite'])
 
sos-collector (version 4.1)
 
This utility is used to collect sosreports from multiple nodes simultaneously.
It uses OpenSSH's ControlPersist feature to connect to nodes and run commands
remotely. If your system installation of OpenSSH is older than 5.6, please
upgrade.
 
An archive of sosreport tarballs collected from the nodes will be generated in
/var/tmp/sos.kntlhfn6 and may be provided to an appropriate support
representative.
 
The generated archive may contain data considered sensitive and its content
should be reviewed by the originating organization before being passed to any
third party.
 
No configuration changes will be made to the system running this utility or
remote systems that it connects to.
 
[sos_collector:configure_sos_cmd] Initial sos cmd set to sosreport --batch --case-id=999999 --all-logs -c auto
[sos_collector:prep] password not specified, assuming SSH keys
sos-collector ASSUMES that SSH keys are installed on all nodes unless the
--password option is provided.
 
[localhost:determine_host_policy] using local policy Red Hat Enterprise Linux
[localhost:run_command] Running command hostname
[rhel8-1.examplerh.com:get_hostname] Hostname set to rhel8-1.examplerh.com
[rhel8-1.examplerh.com:_load_sos_info] sos version is 4.1
[rhel8-1.examplerh.com:run_command] Running command sosreport -l
[rhel8-1.examplerh.com:run_command] Running command sosreport --list-presets
[rhel8-1.examplerh.com:run_command] Running command oc whoami
[sos_collector:determine_cluster] Installation matches pacemaker, checking for layered profiles
Cluster type set to Pacemaker High Availability Cluster Manager
[rhel8-1.examplerh.com:run_command] Running command pcs status
Cluster failed to enumerate nodes: 'NoneType' object is not iterable
[pacemaker] Failed to get node list: 'NoneType' object is not iterable
[sos_collector:get_nodes_from_cluster] Node list: []
[sos_collector:reduce_node_list] Node list reduced to []
 
The following is a list of nodes to collect from:
	rhel8-1.examplerh.com
 
[archive:TarFileArchive] initialised empty FileCacheArchive at '/var/tmp/sos.kntlhfn6/sos-collector-999999-2022-03-18-talcu'
 
Connecting to nodes...
Collection would only gather from localhost due to failure to either enumerate or connect to cluster nodes. Assuming single collection from localhost is not desired.
Aborting...
[sos_collector:close_all_connections] Closing SSH connection to localhost

Version-Release number of selected component (if applicable):
sos-4.1-9.el8_5.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Make sure pacemaker is started and pcs status shows nodes online.
2. sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999  --batch


Actual results:
It fails to get the node list from the `pcs status` output.

Expected results:
It should collect sosreports from all nodes in the cluster.

Additional info:
If I pass `--nodes` explicitly, it works:
# sos collect -o corosync,pacemaker -vvvvv --all-logs --case-id=999999  --batch --nodes=rhel8-1.examplerh.com,rhel8-2.examplerh.com,rhel8-3.examplerh.com

Comment 2 Pavel Moravec 2022-06-16 10:16:06 UTC
*** Bug 2097673 has been marked as a duplicate of this bug. ***

Comment 5 Pavel Moravec 2022-06-17 10:12:13 UTC
Thanks for testing the RHEL8 clone. If it is difficult to prepare a pacemaker cluster with RHEL9 nodes, we can do just a sanity check, I guess; I verified the source code is the same in both sos-4.3-2.el8.noarch and sos-4.3-2.el9.noarch. So unless there is some change in either:
- pacemaker on RHEL8 vs. pacemaker on RHEL9,
- OS behaviour,
- or python3.6 / python3.9 behaviour over the same sos code,
we can expect the fix to work well in RHEL9 as well.

Comment 17 errata-xmlrpc 2022-11-15 11:12:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (sos bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:8275