Bug 1755700 - 404 error : https://build.gluster.org/job/centos7-regression/7972/consoleFull
Summary: 404 error : https://build.gluster.org/job/centos7-regression/7972/consoleFull
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-26 05:31 UTC by Nithya Balachandran
Modified: 2019-11-05 11:52 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2019-11-05 11:52:01 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
log file (229.63 KB, application/gzip)
2019-09-26 12:07 UTC, M. Scherer

Description Nithya Balachandran 2019-09-26 05:31:06 UTC
Description of problem:

https://review.gluster.org/#/c/glusterfs/+/23488/ states the centos regression failed.
However, trying to access the details of the run returns an HTTP 404 error.

https://build.gluster.org/job/centos7-regression/7972/

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 M. Scherer 2019-09-26 12:06:55 UTC
So, the build is on the disk; I can pass it to you. It seems the build.xml file is missing, which is why it doesn't appear in the UI. I am not sure why this happened :/

The file is attached to this bug.
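
For reference, a minimal sketch of how one could check this on the Jenkins master; the /var/lib/jenkins path and the builds/<number> layout are assumptions based on the default Jenkins directory structure, not confirmed details of build.gluster.org:

#!/usr/bin/env python3
# Minimal sketch: check whether a Jenkins build directory is still on disk and
# whether its build.xml and console log are present. The JENKINS_HOME path is
# an assumption based on the default Jenkins layout; adjust for the real host.
import os

JENKINS_HOME = "/var/lib/jenkins"   # assumption
JOB = "centos7-regression"
BUILD = "7972"

build_dir = os.path.join(JENKINS_HOME, "jobs", JOB, "builds", BUILD)
print("build dir exists: ", os.path.isdir(build_dir))
print("build.xml present:", os.path.isfile(os.path.join(build_dir, "build.xml")))
print("log present:      ", os.path.isfile(os.path.join(build_dir, "log")))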

Comment 2 M. Scherer 2019-09-26 12:07:23 UTC
Created attachment 1619490 [details]
log file

Comment 3 M. Scherer 2019-09-26 13:11:35 UTC
So, looking at the logs, we do have this problem on a regular basis. For the few jobs I did check, the log is truncated; I suspect that means the build process somehow crashed. I am not sure how or when, but it happened for the following builds:

1818
1824
1827
1832
1833
2013
2015
2016
2018
2314
2316
2317
2318
2319
2321
2322
2323
2335
2574
2575
2576
2577
2654
2655
2659
6402
6403
6404
7971
7972
7973
7976
7979
7980


I guess I can focus on the recent problem (so the 79xx builds) and see where they got built and when.
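
A rough way to produce a list like the one above would be to walk the job's builds directory and flag every numbered build that has no build.xml; a sketch, under the same assumed paths as the earlier one:

#!/usr/bin/env python3
# Sketch: list the build numbers of a job whose directory has no build.xml
# (and which therefore do not show up in the Jenkins UI). Paths are the same
# assumptions as in the earlier sketch.
import os

JENKINS_HOME = "/var/lib/jenkins"   # assumption
JOB = "centos7-regression"

builds_dir = os.path.join(JENKINS_HOME, "jobs", JOB, "builds")
broken = []
for entry in os.listdir(builds_dir):
    if not entry.isdigit():         # skip symlinks such as lastSuccessfulBuild
        continue
    if not os.path.isfile(os.path.join(builds_dir, entry, "build.xml")):
        broken.append(int(entry))

for number in sorted(broken):
    print(number)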

Comment 4 M. Scherer 2019-09-26 13:18:03 UTC
After some massaging, this is what I found:

builder203.aws.gluster.org 
Start time Thu Sep 26 03:02:45 UTC 2019

builder209.aws.gluster.org 
Start time Thu Sep 26 03:12:25 UTC 2019

builder200.aws.gluster.org 
Start time Thu Sep 26 04:00:03 UTC 2019

builder203.aws.gluster.org 
Start time Thu Sep 26 10:08:17 UTC 2019

builder209.aws.gluster.org 
Start time Thu Sep 26 12:44:30 UTC 2019

builder201.aws.gluster.org 
Start time Thu Sep 26 12:59:25 UTC 2019
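
The "massaging" could be done with something along these lines; the log location and the log format (a builder hostname line and a "Start time ..." line) are assumptions based on the output quoted above, and the build numbers are just examples:

#!/usr/bin/env python3
# Sketch: for each affected build, print the builder hostname and the
# "Start time ..." line from its console log. The log path and the exact
# format (a "*.aws.gluster.org" line and a line starting with "Start time")
# are assumptions based on the lines quoted above.
import os

JENKINS_HOME = "/var/lib/jenkins"   # assumption
JOB = "centos7-regression"
BUILDS = [7971, 7972, 7973, 7976, 7979, 7980]   # example build numbers

for number in BUILDS:
    log_path = os.path.join(JENKINS_HOME, "jobs", JOB, "builds", str(number), "log")
    if not os.path.isfile(log_path):
        continue
    builder = start = None
    with open(log_path, errors="replace") as fh:
        for line in fh:
            if builder is None and ".aws.gluster.org" in line:
                builder = line.strip()
            if start is None and line.startswith("Start time"):
                start = line.strip()
    print(number, builder, start)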

Comment 5 M. Scherer 2019-09-26 13:33:33 UTC
That's the last line for each:

[04:55:27] Running tests in file ./tests/bugs/core/bug-913544.t
[04:54:52] Running tests in file ./tests/bugs/distribute/bug-1125824.t
[04:55:25] Running tests in file ./tests/basic/ec/gfapi-ec-open-truncate.t

[13:24:35] Running tests in file ./tests/bugs/snapshot/bug-1168875.t
[13:24:32] Running tests in file ./tests/basic/ctime/ctime-readdir.t
[13:23:43] Running tests in file ./tests/basic/afr/gfid-mismatch-resolution-with-cli.t

For the 04:55 problem, it seems to have happened while ansible was running.
The 13:24 line is a false positive, since that build is still going. So I will focus on just the 3 runs from this morning and see what is interfering.
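
Pulling out the last line of each log, as above, could be done with a small sketch like this one (same assumed paths as before; the build numbers are just examples):

#!/usr/bin/env python3
# Sketch: print the last non-empty line of each console log, to see which
# test was running when the log was cut off. Same path assumptions as above.
import os

JENKINS_HOME = "/var/lib/jenkins"   # assumption
JOB = "centos7-regression"
BUILDS = [7971, 7972, 7973]         # example build numbers

for number in BUILDS:
    log_path = os.path.join(JENKINS_HOME, "jobs", JOB, "builds", str(number), "log")
    if not os.path.isfile(log_path):
        continue
    last = ""
    with open(log_path, errors="replace") as fh:
        for line in fh:
            if line.strip():
                last = line.strip()
    print(number, last)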

Comment 6 M. Scherer 2019-09-26 13:46:22 UTC
Ok, so I found nothing special. I would be tempted to let it pass for now, unless the problem reproduces itself. It could be a temporary network issue, or anything like that, and unless I can reproduce it, there are not enough logs for me to see anything (and also a ton of useless logs...).

Comment 7 Nithya Balachandran 2019-09-27 04:00:58 UTC
Thanks Michael.

Comment 8 Nithya Balachandran 2019-11-05 06:22:39 UTC
(In reply to M. Scherer from comment #6)
> Ok, so I found nothing special. I would be tempted to let it pass for now,
> unless the problem reproduces itself. It could be a temporary network issue,
> or anything like that, and unless I can reproduce it, there are not enough
> logs for me to see anything (and also a ton of useless logs...)


Shall I go ahead and close this?

Comment 9 M. Scherer 2019-11-05 11:52:01 UTC
It seems not to have reappeared, so yeah, let's close it. It can be reopened anyway.

