Description of problem:

https://review.gluster.org/#/c/glusterfs/+/23488/ states the centos regression failed. However, trying to access the details of the run returns an HTTP 404 error.

https://build.gluster.org/job/centos7-regression/7972/

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
So, the build is on disk; I can pass it to you. It seems the build.xml file is missing, which is why it doesn't appear in the UI. Not sure why this happened :/ The log file is attached to this bug.
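In case someone wants to check which other builds are in the same state, a minimal sketch like this would list them. The Jenkins job directory path below is an assumption; adjust it to the actual layout on the master.

import os

# Hypothetical path; the real Jenkins home on build.gluster.org may differ.
BUILDS_DIR = "/var/lib/jenkins/jobs/centos7-regression/builds"

for entry in sorted(os.listdir(BUILDS_DIR)):
    build_dir = os.path.join(BUILDS_DIR, entry)
    # Skip symlinks such as lastSuccessfulBuild; real builds are numbered dirs.
    if not entry.isdigit() or not os.path.isdir(build_dir):
        continue
    if not os.path.isfile(os.path.join(build_dir, "build.xml")):
        print("build %s has no build.xml" % entry)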
Created attachment 1619490 [details]
log file
So, looking at the logs, we do have this problem on a regular basis. For the few jobs I did check, the log is truncated; I suspect that means the build process somehow crashed. I am not sure how and when, but it happened for a few builds:

1818 1824 1827 1832 1833
2013 2015 2016 2018
2314 2316 2317 2318 2319 2321 2322 2323 2335
2574 2575 2576 2577
2654 2655 2659
6402 6403 6404
7971 7972 7973 7976 7979 7980

I guess I can focus on the recent ones (so 79xx) and see where and when they got built.
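For reference, a rough sketch of how one could flag builds whose console log looks truncated. The build directory path is an assumption, and it relies on a completed Jenkins console log normally ending with a "Finished: <result>" line.

import os

BUILDS_DIR = "/var/lib/jenkins/jobs/centos7-regression/builds"  # assumed layout

def looks_truncated(log_path):
    # A finished Jenkins console log normally ends with a "Finished: <result>" line.
    with open(log_path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - 4096))
        tail = f.read().decode("utf-8", errors="replace")
    return "Finished:" not in tail

for entry in sorted(os.listdir(BUILDS_DIR)):
    log_path = os.path.join(BUILDS_DIR, entry, "log")
    if entry.isdigit() and os.path.isfile(log_path) and looks_truncated(log_path):
        print("build %s looks truncated" % entry)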
After some massaging, here is what I found. The affected builds ran on these builders, at these start times:

builder203.aws.gluster.org  Start time Thu Sep 26 03:02:45 UTC 2019
builder209.aws.gluster.org  Start time Thu Sep 26 03:12:25 UTC 2019
builder200.aws.gluster.org  Start time Thu Sep 26 04:00:03 UTC 2019
builder203.aws.gluster.org  Start time Thu Sep 26 10:08:17 UTC 2019
builder209.aws.gluster.org  Start time Thu Sep 26 12:44:30 UTC 2019
builder201.aws.gluster.org  Start time Thu Sep 26 12:59:25 UTC 2019
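The "massaging" can be approximated with a small script like this one; the exact log line formats (the "Building remotely on" line in particular) are assumptions.

import re
import sys

# Assumed patterns: Jenkins prints a "Building remotely on <node> ..." line and
# the regression script prints a "Start time ..." line near the top of the log.
NODE_RE = re.compile(r"Building remotely on (\S+)")
START_RE = re.compile(r"Start time .*")

for log_path in sys.argv[1:]:
    with open(log_path, errors="replace") as f:
        text = f.read()
    node = NODE_RE.search(text)
    start = START_RE.search(text)
    print(log_path,
          node.group(1) if node else "unknown node",
          start.group(0) if start else "no start time found")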
That's the last line for each:

[04:55:27] Running tests in file ./tests/bugs/core/bug-913544.t
[04:54:52] Running tests in file ./tests/bugs/distribute/bug-1125824.t
[04:55:25] Running tests in file ./tests/basic/ec/gfapi-ec-open-truncate.t
[13:24:35] Running tests in file ./tests/bugs/snapshot/bug-1168875.t
[13:24:32] Running tests in file ./tests/basic/ctime/ctime-readdir.t
[13:23:43] Running tests in file ./tests/basic/afr/gfid-mismatch-resolution-with-cli.t

For the 04:55 problem, it seems to have happened while ansible was running. The 13:24 lines are false positives, since those builds are still running. So I will focus just on the 3 runs from this morning and see what is interfering.
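A quick sketch of how to pull out that last line per log, nothing fancy, just the last matching "Running tests in file" line of each console log passed on the command line:

import sys

# Print the last "Running tests in file" line from each console log, to see
# where a truncated run stopped.
for log_path in sys.argv[1:]:
    last = None
    with open(log_path, errors="replace") as f:
        for line in f:
            if "Running tests in file" in line:
                last = line.rstrip()
    print(log_path, "->", last or "no test line found")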
Ok, so I found nothing special. I would be tempted to let it pass for now, unless the problem reproduces itself. It could be a temporary network issue, or anything like that, and unless I can reproduce it, there are not enough logs for me to see anything (and also a ton of useless logs...).
Thanks Michael.
(In reply to M. Scherer from comment #6)
> Ok, so I found nothing special. I would be tempted to let it pass for now,
> unless the problem reproduces itself. It could be a temporary network issue,
> or anything like that, and unless I can reproduce it, there are not enough
> logs for me to see anything (and also a ton of useless logs

Shall I go ahead and close this?
It seems not to have reappeared, so yeah, let's close it. It can be reopened anyway.