Description of problem:
https://review.gluster.org/#/c/glusterfs/+/23488/ states that the CentOS regression failed.
However, trying to access the details of the run returns an HTTP 404 error.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
So, the build is on the disk, I can pass it to you. It seems the build.xml file is missing, which is why it doesn't appear in the UI. Not sure why this happened :/
The file is attached to this bug
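For reference, this is roughly how such builds can be spotted on the Jenkins host (a sketch only; the job name "centos7-regression" and the /var/lib/jenkins path are assumptions, adjust to the actual layout):

  # list build directories that have no build.xml (path and job name are assumptions)
  for d in /var/lib/jenkins/jobs/centos7-regression/builds/[0-9]*/; do
      [ -e "${d}build.xml" ] || echo "missing build.xml: $d"
  done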
Created attachment 1619490 [details]
So, looking at the logs, we do have the problem on a regular basis. For the few jobs I did check, the log is truncated; I suspect that means the build process somehow crashed. I am not sure how and when, but it happened for a few builds:
I guess I can focus on the recent problem (so 79xx), and see where they got built and when.
After some massaging (command sketch after the lists below), this is what I found:
Start time Thu Sep 26 03:02:45 UTC 2019
Start time Thu Sep 26 03:12:25 UTC 2019
Start time Thu Sep 26 04:00:03 UTC 2019
Start time Thu Sep 26 10:08:17 UTC 2019
Start time Thu Sep 26 12:44:30 UTC 2019
Start time Thu Sep 26 12:59:25 UTC 2019
And this is the last line logged for each:
[04:55:27] Running tests in file ./tests/bugs/core/bug-913544.t
[04:54:52] Running tests in file ./tests/bugs/distribute/bug-1125824.t
[04:55:25] Running tests in file ./tests/basic/ec/gfapi-ec-open-truncate.t
[13:24:35] Running tests in file ./tests/bugs/snapshot/bug-1168875.t
[13:24:32] Running tests in file ./tests/basic/ctime/ctime-readdir.t
[13:23:43] Running tests in file ./tests/basic/afr/gfid-mismatch-resolution-with-cli.t
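The lines above were pulled out with something along these lines (a sketch; the job name and paths are assumptions, and I rely on the fact that a complete Jenkins console log ends with a "Finished: <result>" line, so its absence flags a truncated log):

  # job name and paths are assumptions; check recent 79xx builds for truncated console logs
  for d in /var/lib/jenkins/jobs/centos7-regression/builds/79*/; do
      log="${d}log"
      [ -f "$log" ] || continue
      grep -q 'Finished:' "$log" && continue   # complete run, not interesting
      echo "== $d (no Finished: line, looks truncated)"
      grep -m 1 'Start time' "$log"            # first "Start time" line
      tail -n 1 "$log"                         # last line before the log stops
  done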
For the 04:55 problem, it seems to have happened while Ansible was running.
The 13:24 line is a false positive, since that build is still running. So I will focus just on the 3 runs from this morning and see what is interfering.
Ok, so I found nothing special. I would be tempted to let it pass for now, unless the problem reproduces itself. It could be a temporary network issue, or anything like that, and unless I can reproduce it, there are not enough logs for me to see anything (and also a ton of useless logs..)
(In reply to M. Scherer from comment #6)
> Ok, so I found nothing special. I would be tempted to let it pass for now,
> unless the problem reproduces itself. It could be a temporary network issue,
> or anything like that, and unless I can reproduce it, there are not enough
> logs for me to see anything (and also a ton of useless logs
Shall I go ahead and close this?
It seems not to have reappeared, so yeah, let's close it. It can be reopened anyway.