Bug 1491156 - Aborted test runs in jenkins don't have cores
Summary: Aborted test runs in jenkins don't have cores
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Nigel Babu
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-13 08:09 UTC by Raghavendra G
Modified: 2018-04-12 15:02 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-04-12 15:02:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra G 2017-09-13 08:09:20 UTC
Description of problem:

I have two test runs - smoke and centos regression [1][2] - that were aborted due to a deadlock in GlusterFS processes. However, the console output doesn't include either a backtrace or a core from the aborted process. It would be helpful if we could get cores.

[1] https://build.gluster.org/job/centos6-regression/6332/console
[2] https://build.gluster.org/job/smoke/37110/

Comment 1 Nigel Babu 2017-09-13 11:04:39 UTC
I'm guessing this means it wasn't a crash. The best way to do this would be to attach gdb to the process to generate the core, yes?

Comment 2 Niels de Vos 2017-09-13 11:37:24 UTC
(In reply to Nigel Babu from comment #1)
> I'm guessing this means it wasn't a crash. The best way to do this would be
> to attach gdb to the process to generate the core, yes?

That, or run 'gcore $PID' and 'gstack $PID' to make it a little easier.
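For illustration, a minimal sketch of how those two commands could be used on a single hung process, assuming gdb is installed on the build slave (the PID variable and output filenames are placeholders):

    gstack "$PID" > "stack-$PID.txt"   # print per-thread backtraces of the hung process
    gcore -o "core-$PID" "$PID"        # write a core file (core-$PID.$PID) without killing the process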

Comment 3 Nigel Babu 2017-09-13 12:21:20 UTC
Is there a preferred command to get the right PID?

Comment 4 Niels de Vos 2017-09-13 12:39:56 UTC
(In reply to Nigel Babu from comment #3)
> Is there a preferred command to get the right PID?

I don't think so; that would require guessing which process hangs. You probably need to capture the stack and core from all gluster processes.
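For example, the job's abort/cleanup step could run something along these lines; the pgrep pattern and the /var/log paths below are illustrative assumptions, not what the Jenkins jobs actually use:

    # Capture a stack trace and a core from every running gluster process
    # (glusterd, glusterfs, glusterfsd) before the run is torn down.
    for pid in $(pgrep gluster); do
        gstack "$pid" > "/var/log/gluster-stack-${pid}.txt"
        gcore -o "/var/log/gluster-core-${pid}" "$pid"
    done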

Comment 5 Nigel Babu 2018-04-12 15:02:58 UTC
We will no longer have hangs because of the per-patch timeout. Closing this bug.

