Bug 1609596

Summary: "no build executed" failure in regression
Product: [Community] GlusterFS
Component: project-infrastructure
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: Atin Mukherjee <amukherj>
Assignee: bugs <bugs>
CC: bugs, gluster-infra, nigelb
Type: Bug
Last Closed: 2018-08-03 10:11:16 UTC

Description Atin Mukherjee 2018-07-30 01:20:48 UTC
Description of problem:

Patch set 7 of https://review.gluster.org/#/c/20584/ got a "No Builds Executed" report. I'm not sure what that means, as no vote was cast back on the change. Also, on patch set 8 the smoke build failed, link - https://build.gluster.org/job/devrpm-el7/10229/

Another observation: even though exit_on_failure was set to 'no' in run-tests.sh, the regression run still didn't cover all the tests, link - https://build.gluster.org/job/centos7-regression/1965/
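
For reference, the behaviour I'd expect from exit_on_failure is roughly the following (a minimal sketch, not the actual run-tests.sh; TESTS, run_one, and failed_tests are made-up names for illustration):

#!/bin/bash
# Sketch of the expected exit_on_failure semantics: with
# exit_on_failure='no' a failing test is recorded and the loop keeps
# going, so every test in the suite still runs.
exit_on_failure='no'
TESTS=( tests/*.t )        # hypothetical test list
failed_tests=()

run_one() {
    prove -f "$1"          # run a single .t test
}

for t in "${TESTS[@]}"; do
    if ! run_one "$t"; then
        failed_tests+=("$t")
        [ "$exit_on_failure" = "yes" ] && break   # abort only when 'yes'
    fi
done

echo "Failed: ${failed_tests[*]:-none}"
[ ${#failed_tests[@]} -eq 0 ]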
 

Comment 1 Nigel Babu 2018-07-30 03:54:47 UTC
The logs don't really indicate why patch set 7 did not trigger a smoke build. However, when patch set 8 was pushed, it triggered a "No Builds Executed" message for patch set 7 and then went on to trigger the smoke builds. I will dig into the Jenkins source code to figure out what happened.
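
While digging, a quick way to see what the Gerrit Trigger actually scheduled is to ask the Jenkins JSON API which Gerrit change and patchset each build ran for. A rough sketch (the job name is just one from this report; GERRIT_CHANGE_NUMBER and GERRIT_PATCHSET_NUMBER are the standard parameters the Gerrit Trigger plugin injects; needs jq):

#!/bin/bash
# List recent builds of a job with the Gerrit change/patchset that
# triggered each one, via the Jenkins JSON API. -g stops curl from
# globbing the [] in the tree query.
JENKINS=https://build.gluster.org
JOB=devrpm-el7

curl -sg "$JENKINS/job/$JOB/api/json?tree=builds[number,actions[parameters[name,value]]]" |
  jq -r '.builds[] |
         [ .number,
           (.actions[].parameters[]? | select(.name=="GERRIT_CHANGE_NUMBER")   | .value),
           (.actions[].parameters[]? | select(.name=="GERRIT_PATCHSET_NUMBER") | .value) ] |
         @tsv'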

Comment 2 Nigel Babu 2018-08-03 10:11:16 UTC
Well, that was traced back to something going wrong inside Jenkins. My strong suspicion is that after the latest Java update, Jenkins started behaving inconsistently. We eventually had to restart it, and it has been working well ever since.
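
For the record, the restart itself was nothing special; something along these lines (a sketch only — the exact service setup on build.gluster.org may differ, and the credentials below are placeholders):

# Ask Jenkins to let running jobs finish, then restart cleanly.
# /safeRestart is a built-in Jenkins endpoint; admin:TOKEN is a
# placeholder for a real API token with Administer permission.
curl -X POST -u admin:TOKEN https://build.gluster.org/safeRestart

# Or restart the service directly on the host:
sudo systemctl restart jenkins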