Bug 1609596 - "no build executed" failure in regression
Summary: "no build executed" failure in regression
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-30 01:20 UTC by Atin Mukherjee
Modified: 2018-08-03 10:11 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-03 10:11:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Atin Mukherjee 2018-07-30 01:20:48 UTC
Description of problem:

Patch set 7 of https://review.gluster.org/#/c/20584/ had a report of "No Builds Executed". I'm not sure what that means, as no vote was cast back. Also, on patch set 8 the smoke job failed, link - https://build.gluster.org/job/devrpm-el7/10229/ .

Another observation: even though exit_on_failure was set to 'no' in run-tests.sh, the regression run did not cover all the tests, link - https://build.gluster.org/job/centos7-regression/1965/
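For context, the expected behaviour of such a flag can be sketched as below. This is a hypothetical illustration of how an exit_on_failure switch typically gates a test-runner loop; the function and test names are illustrative and not taken from GlusterFS's run-tests.sh.

```shell
#!/bin/sh
# Sketch of an exit_on_failure flag in a test-runner loop.
# run_one_test and the test names are stand-ins, not real GlusterFS tests.

run_one_test() {
    # Simulate one regression test; "test_b" always fails.
    [ "$1" != "test_b" ]
}

run_suite() {
    # $1: exit_on_failure ("yes" or "no")
    failed=0
    for t in test_a test_b test_c; do
        if run_one_test "$t"; then
            echo "PASS: $t"
        else
            echo "FAIL: $t"
            failed=1
            # With exit_on_failure=no the loop keeps going so test_c
            # still runs; breaking here would match the symptom in
            # this bug, where later tests were never executed.
            [ "$1" = "yes" ] && break
        fi
    done
    return $failed
}

run_suite no || echo "suite reported failure"
```

With exit_on_failure=no the suite should still reach test_c and only report the overall failure at the end, which is why a run that stops early despite the flag points at the harness rather than the tests.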
 
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Nigel Babu 2018-07-30 03:54:47 UTC
The logs don't really indicate why patch set 7 did not trigger a smoke build. However, when patch set 8 was pushed, it triggered a message for patch set 7 saying "No Builds Executed", and then it went on to trigger the smoke builds. I will dig into the Jenkins source code to figure out what happened.

Comment 2 Nigel Babu 2018-08-03 10:11:16 UTC
Well, this was tracked down to something wrong with Jenkins. My strong suspicion is that after the latest Java update, Jenkins started behaving inconsistently. We eventually had to do a restart, and things have been fine ever since.

