Bug 2101916 - must-gather is not collecting ceph logs or coredumps
Summary: must-gather is not collecting ceph logs or coredumps
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: must-gather
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: yati padia
QA Contact: Joy John Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-28 19:02 UTC by Josh Durgin
Modified: 2023-08-09 16:35 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-21 15:22:14 UTC
Embargoed:


Attachments: None


Links:
System ID                                       Status  Summary                                          Last Updated
Github red-hat-storage odf-must-gather pull 11  Merged  Fix the call to crash_core_collection function.  2023-03-28 10:20:48 UTC
Red Hat Product Errata RHBA-2023:3742           None    None                                             2023-06-21 15:23:07 UTC

Description Josh Durgin 2022-06-28 19:02:01 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Ceph daemon logs and coredumps, which are critical for debugging, are no longer being collected. Collection was originally implemented via https://bugzilla.redhat.com/show_bug.cgi?id=1869406

https://github.com/red-hat-storage/ocs-operator/blob/main/must-gather/collection-scripts/gather_ceph_resources#L381-L405
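
(For orientation, a simplified sketch of what this per-node collection amounts to; this is not the exact script, and the node label selector and rook default paths are assumptions:)

for node in $(oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o name); do
  # Debug pods mount the node's root filesystem at /host; the rook log
  # directory holds the per-daemon log files (e.g. ceph-osd.0.log)
  oc debug "$node" -- chroot /host ls /var/lib/rook/openshift-storage/log
done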


Version of all relevant components (if applicable):

ODF/OCP 4.10

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

This is a supportability issue. Without historical logs and coredumps, many problems are impossible to debug.

Is there any workaround available to the best of your knowledge?

No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1

Is this issue reproducible?

Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Start a Ceph cluster
2. Crash a Ceph daemon (e.g., with kill -SIGABRT) to trigger a coredump
3. Run must-gather
4. Check the must-gather output for logs from Ceph daemons (e.g. ceph-osd.0.log) and for coredumps (a hedged command sketch follows)
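
A command sketch of the steps above; the deployment/container names and the availability of pgrep inside the OSD container are assumptions about the environment:

# Step 2: abort a running OSD to trigger a coredump (names vary per cluster)
oc -n openshift-storage exec deploy/rook-ceph-osd-0 -c osd -- bash -c 'kill -SIGABRT "$(pgrep -o ceph-osd)"'

# Step 3: run must-gather (the image reference is a placeholder)
oc adm must-gather --image=<odf-must-gather-image>

# Step 4: the output should contain per-node directories such as
# ceph_daemon_log_<node>/, crash_<node>/ and coredump_<node>/
ls must-gather.local.*/*/ceph/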


Actual results:
No logs or coredumps


Expected results:
Log files and coredumps are saved

Additional info:

An example is https://bugzilla.redhat.com/show_bug.cgi?id=2095331, specifically http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-aman/9jun22/, where rbd-mirror crashed multiple times (as shown by all the crash metadata files in c2/must-gather.local.4254235368011215624/quay-io-rhceph-dev-ocs-must-gather-sha256-8d5835527008fbde909bb1f570e4be06ac6b9b9bb76369cd934d55979c6021bf/ceph/must_gather_commands), yet no logs or coredumps are available. There should be a directory for each node (e.g. one named compute-0 in this case) where these are saved.

Comment 4 yati padia 2022-10-13 05:27:49 UTC
@jdurgin, can I get a cluster to debug the above? I need to check whether the debug nodes contain the crash logs and coredumps.
Per the current code, it does extract crashes from `/host/var/lib/rook/openshift-storage/crash/`. The logs may first need to be collected at that location before they can be extracted.
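
(A minimal sketch of that check, assuming a node debug pod with the host filesystem mounted at /host; the node name is taken from this bug's examples:)

# Each crash gets a timestamped subdirectory containing its log and meta files
oc debug node/compute-0 -- chroot /host ls /var/lib/rook/openshift-storage/crash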

Comment 12 Joy John Pinto 2023-05-31 04:42:31 UTC
Verified in OCP 4.13.0-0.nightly-2023-05-23-145816 and ODF 4.13.0-205.stable provided by Red Hat

1. Induced an OSD crash by deleting the OSD disk from vCenter
2. Checked the ceph crash ls output:
   sh-5.1$ ceph crash ls
ID                                                                ENTITY  NEW  
2023-05-30T12:41:00.575000Z_c155cfd3-5f6f-4207-b924-ad292d900e01  osd.0    *   
sh-5.1$ 
3. Ran the must-gather command and verified the crash log details under 'must-gather.local.1711890767482071931/quay-io-rhceph-dev-ocs-must-gather-sha256-10071ddc29383af01d60eadfa4d6f2bd631cfd4c06fcdf7efdb655a84b13a4f1/ceph/crash_compute-0/2023-05-30T12:41:00.575000Z_c155cfd3-5f6f-4207-b924-ad292d900e01'
[jopinto@jopinto 2023-05-30T12:41:00.575000Z_c155cfd3-5f6f-4207-b924-ad292d900e01]$ pwd
/home/jopinto/Desktop/temp/serv/must-gather.local.1711890767482071931/quay-io-rhceph-dev-ocs-must-gather-sha256-10071ddc29383af01d60eadfa4d6f2bd631cfd4c06fcdf7efdb655a84b13a4f1/ceph/crash_compute-0/2023-05-30T12:41:00.575000Z_c155cfd3-5f6f-4207-b924-ad292d900e01
[jopinto@jopinto 2023-05-30T12:41:00.575000Z_c155cfd3-5f6f-4207-b924-ad292d900e01]$ ls
log  meta
[jopinto@jopinto 2023-05-30T12:41:00.575000Z_c155cfd3-5f6f-4207-b924-ad292d900e01]$

[jopinto@jopinto must_gather_commands]$ cd ..
[jopinto@jopinto ceph]$ ls
ceph_daemon_log_compute-0  coredump_compute-0  journal_compute-0  kernel_compute-0  logs                              namespaces
ceph_daemon_log_compute-1  crash_compute-0     journal_compute-1  kernel_compute-1  must_gather_commands              timestamp
ceph_daemon_log_compute-2  event-filter.html   journal_compute-2  kernel_compute-2  must_gather_commands_json_output
[jopinto@jopinto ceph]$ cd coredump_compute-0/
[jopinto@jopinto coredump_compute-0]$ ls
core.ceph-osd.167.d3d56641115f497da1ffde8db3d27bc1.20207.1685450460000000.lz4

Comment 13 errata-xmlrpc 2023-06-21 15:22:14 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update) and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

