Bug 1973410

Summary: Must-gather: Ceph files are missing when the worker node running the "must-gather" pod is restarted
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Oded <oviner>
Component: must-gather
Assignee: yati padia <ypadia>
Status: CLOSED WONTFIX
QA Contact: Elad <ebenahar>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.8
CC: muagarwa, ocs-bugs, odf-bz-bot, sabose
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-02-08 12:46:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Oded 2021-06-17 18:56:23 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Must-gather: Ceph files are missing when the worker node running the "must-gather" pod is restarted.

Version of all relevant components (if applicable):
OCS version: ocs-operator.v4.8.0-416.ci
OCP version: 4.8.0-0.nightly-2021-06-13-101614
Provider: vmware
type: lso

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Run the must-gather command:
$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.8


2. Check which node the must-gather pod is running on [in parallel with step 1]:
$ oc get pods -o wide| grep mus
must-gather-n4vxn-helper                                          1/1     Running     0          10s    10.129.2.125   compute-2   <none>           <none>


3. Restart the node where the must-gather pod is running [in parallel with step 1]. The node goes NotReady and then returns to Ready:
$ oc get node compute-2
NAME        STATUS     ROLES    AGE     VERSION
compute-2   NotReady   worker   3d16h   v1.21.0-rc.0+120883f
$ oc get node compute-2
NAME        STATUS   ROLES    AGE     VERSION
compute-2   Ready    worker   3d16h   v1.21.0-rc.0+120883f


4. Verify the must-gather directory content.
The Ceph files do not exist [Fail!!]

E           ['ceph-volume_raw_list', 'ceph_auth_list', 'ceph_balancer_status', 'ceph_config-key_ls', 'ceph_config_dump', 'ceph_crash_stat', 'ceph_device_ls', 'ceph_fs_dump', 'ceph_fs_ls', 'ceph_fs_status', 'ceph_fs_subvolumegroup_ls_ocs-storagecluster-cephfilesystem', 'ceph_health_detail', 'ceph_mds_stat', 'ceph_mgr_dump', 'ceph_mgr_module_ls', 'ceph_mgr_services', 'ceph_mon_dump', 'ceph_mon_stat', 'ceph_osd_blocked-by', 'ceph_osd_crush_class_ls', 'ceph_osd_crush_dump', 'ceph_osd_crush_rule_dump', 'ceph_osd_crush_rule_ls', 'ceph_osd_crush_show-tunables', 'ceph_osd_crush_weight-set_dump', 'ceph_osd_df', 'ceph_osd_df_tree', 'ceph_osd_dump', 'ceph_osd_getmaxosd', 'ceph_osd_lspools', 'ceph_osd_numa-status', 'ceph_osd_perf', 'ceph_osd_pool_ls_detail', 'ceph_osd_stat', 'ceph_osd_tree', 'ceph_osd_utilization', 'ceph_pg_dump', 'ceph_pg_stat', 'ceph_quorum_status', 'ceph_report', 'ceph_service_dump', 'ceph_status', 'ceph_time-sync-status', 'ceph_versions', 'ceph_df_detail']
E           ['ceph_auth_list_--format_json-pretty', 'ceph_balancer_pool_ls_--format_json-pretty', 'ceph_balancer_status_--format_json-pretty', 'ceph_config-key_ls_--format_json-pretty', 'ceph_config_dump_--format_json-pretty', 'ceph_crash_ls_--format_json-pretty', 'ceph_crash_stat_--format_json-pretty', 'ceph_device_ls_--format_json-pretty', 'ceph_fs_dump_--format_json-pretty', 'ceph_fs_ls_--format_json-pretty', 'ceph_fs_status_--format_json-pretty', 'ceph_fs_subvolumegroup_ls_ocs-storagecluster-cephfilesystem_--format_json-pretty', 'ceph_health_detail_--format_json-pretty', 'ceph_mds_stat_--format_json-pretty', 'ceph_mgr_dump_--format_json-pretty', 'ceph_mgr_module_ls_--format_json-pretty', 'ceph_mgr_services_--format_json-pretty', 'ceph_mon_dump_--format_json-pretty', 'ceph_mon_stat_--format_json-pretty', 'ceph_osd_blacklist_ls_--format_json-pretty', 'ceph_osd_blocked-by_--format_json-pretty', 'ceph_osd_crush_class_ls_--format_json-pretty', 'ceph_osd_crush_dump_--format_json-pretty', 'ceph_osd_crush_rule_dump_--format_json-pretty', 'ceph_osd_crush_rule_ls_--format_json-pretty', 'ceph_osd_crush_show-tunables_--format_json-pretty', 'ceph_osd_crush_weight-set_dump_--format_json-pretty', 'ceph_osd_crush_weight-set_ls_--format_json-pretty', 'ceph_osd_df_--format_json-pretty', 'ceph_osd_df_tree_--format_json-pretty', 'ceph_osd_dump_--format_json-pretty', 'ceph_osd_getmaxosd_--format_json-pretty', 'ceph_osd_lspools_--format_json-pretty', 'ceph_osd_numa-status_--format_json-pretty', 'ceph_osd_perf_--format_json-pretty', 'ceph_osd_pool_ls_detail_--format_json-pretty', 'ceph_osd_stat_--format_json-pretty', 'ceph_osd_tree_--format_json-pretty', 'ceph_osd_utilization_--format_json-pretty', 'ceph_pg_dump_--format_json-pretty', 'ceph_pg_stat_--format_json-pretty', 'ceph_progress_--format_json-pretty', 'ceph_progress_json', 'ceph_progress_json_--format_json-pretty', 'ceph_quorum_status_--format_json-pretty', 'ceph_report_--format_json-pretty', 
'ceph_service_dump_--format_json-pretty', 'ceph_status_--format_json-pretty', 'ceph_time-sync-status_--format_json-pretty', 'ceph_versions_--format_json-pretty', 'ceph_df_detail_--format_json-pretty']
E           ['pools_rbd_ocs-storagecluster-cephblockpool']
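The step-4 check can be sketched as a small script that walks an unpacked must-gather directory and reports which of the expected Ceph command dumps are absent. The file names below are a subset taken from the lists above; the directory layout (dumps at arbitrary depth under the must-gather root) and the function name are assumptions for illustration, not part of the must-gather tooling.

```python
import os

# Subset of the expected Ceph command dumps listed above (illustrative;
# the full expected set is the three lists in this report).
EXPECTED_CEPH_DUMPS = [
    "ceph_status",
    "ceph_health_detail",
    "ceph_osd_tree",
    "ceph_versions",
]

def missing_ceph_dumps(mg_root, expected=EXPECTED_CEPH_DUMPS):
    """Return the expected dump names not found anywhere under mg_root."""
    found = set()
    # The dumps may sit at any depth under the must-gather root, so
    # collect every file name in the tree before comparing.
    for _dirpath, _dirnames, filenames in os.walk(mg_root):
        found.update(filenames)
    return [name for name in expected if name not in found]
```

On a clean run (no node restart) this returns an empty list; in the failure above it would return every Ceph dump name, since none of the files were collected.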

**The must-gather directory includes these files when must-gather is collected without restarting the node.


Actual results:
Ceph files do not exist

Expected results:
The must-gather directory includes these files when must-gather is collected without restarting the node.

Additional info:

Comment 2 Mudit Agarwal 2021-06-24 03:12:58 UTC
Not a 4.8 blocker.

Comment 4 Mudit Agarwal 2021-10-14 16:05:15 UTC
Can't fix it before the 4.9 dev freeze, and not a blocker.

Comment 5 Mudit Agarwal 2022-02-08 12:46:36 UTC
Not an easy fix and not a common scenario. Marking it as WONTFIX; must-gather can simply be re-run in such situations.