Bug 1614631
Summary: | Spurious smoke failure in build rpms | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu>
Component: | project-infrastructure | Assignee: | bugs <bugs>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | mainline | CC: | bugs, gluster-infra, ndevos, nigelb
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2018-08-13 02:54:40 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Pranith Kumar K 2018-08-10 05:58:12 UTC
The rest of the error message contains a little more detail:

    INFO: mock.py version 1.4.11 starting (python version = 2.7.5)...
    Start: init plugins
    INFO: selinux disabled
    Finish: init plugins
    Start: run
    INFO: Start(glusterfs-4.2dev-0.240.git4657137.el7.src.rpm) Config(epel-7-x86_64)
    Start: clean chroot
    ERROR: Exception(glusterfs-4.2dev-0.240.git4657137.el7.src.rpm) Config(epel-7-x86_64) 0 minutes 0 seconds
    INFO: Results and/or logs in: /home/jenkins/root/workspace/devrpm-el7/RPMS/el7/x86_64/
    INFO: Cleaning up build root ('cleanup_on_failure=True')
    Start: clean chroot
    ERROR: Build root is locked by another process.
    Build step 'Execute shell' marked build as failure
    Archiving artifacts
    Finished: FAILURE

The "ERROR: Build root is locked by another process." message suggests that another mock process was building for epel-7-x86_64 on the same system. Or, possibly, a mock process exited uncleanly and left some pieces behind.

In order to prevent this in the future, I recommend running mock with a --uniqueext=UNIQUEEXT option that is unique per jobname+jobid.

Cleaned up the old folder and verified that there were no duplicate Jenkins connections, which is when I've seen this happen in the past.
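As an illustration of that recommendation (a sketch only; the actual change made to the Jenkins jobs is not part of this report), the mock invocation in the "Execute shell" build step could derive the unique suffix from the standard Jenkins JOB_NAME and BUILD_ID variables. The epel-7-x86_64 config, result directory, and SRPM glob below are placeholders for whatever the job actually uses:

```bash
#!/bin/bash
# Sketch of a Jenkins "Execute shell" step that gives every build its own
# mock buildroot, so concurrent or leftover mock runs cannot hold the lock.
# JOB_NAME, BUILD_ID and WORKSPACE are standard Jenkins variables; the
# epel-7-x86_64 config and the SRPM glob are placeholders for this job.

# Sanitize the job name (folder jobs can contain '/') and build the suffix.
UNIQUEEXT="$(echo "${JOB_NAME}-${BUILD_ID}" | tr '/' '_')"

mock -r epel-7-x86_64 \
     --uniqueext="${UNIQUEEXT}" \
     --resultdir="${WORKSPACE}/RPMS/el7/x86_64" \
     --rebuild "${WORKSPACE}"/glusterfs-*.src.rpm

# Remove this build's chroot afterwards so stale buildroots do not pile up.
mock -r epel-7-x86_64 --uniqueext="${UNIQUEEXT}" --clean
```

Because the buildroot name then includes the job name and build id, two builds for the same epel-7-x86_64 config no longer share a chroot, and a lock left behind by a crashed run cannot block an unrelated build.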