Description of problem:
Build failed when doing a Jenkins master build on OpenShift with a containerized installation.

Version-Release number of selected component (if applicable):
openshift v3.3.0.30
kubernetes v1.3.0+52492b4
etcd 2.3.0+git
registry.ops.../jenkins-1-rhel7 (a2b7f45b9e0d)

How reproducible:
Always

Steps to Reproduce:
1. Create a Jenkins master app:
   oc new-app https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/master-slave/jenkins-master-template.json
2. Check the Jenkins master build.

Actual results:
$ oc build-logs jenkins-master-1
Cloning "git://github.com/openshift/origin" ...
    Commit: 31f2273690a6b67f3d3dd477f25ab0d365505bf7 (Merge pull request #10810 from lixiaobing10051267/masterTypo)
    Author: Dan McPherson <dmcphers>
    Date:   Wed Sep 7 07:09:48 2016 -0400
---> Copying repository files ...
---> Installing Jenkins 8 plugins using /opt/openshift/plugins.txt ...
Downloading git-2.4.2 ...
Downloading git-client-1.19.0 ...
Downloading ssh-credentials-1.11 ...
Downloading mailer-1.16 ...
Downloading matrix-project-1.6 ...
Downloading scm-api-1.0 ...
Downloading script-security-1.13 ...
Downloading junit-1.2 ...
---> Removing sample Jenkins job ...
rm: cannot remove '/opt/openshift/configuration/jobs/OpenShift Sample': Directory not empty
---> Installing new Jenkins configuration ...
mv: cannot move '/tmp/tmp.KROeHhepEvjenkins/configuration/jobs' to '/opt/openshift/configuration/jobs': Directory not empty
error: build error: non-zero (13) exit code from registry.ops...jenkins-1-rhel7@sha256:4be90f13eb930e3f8dfbf604da7b7d6bec4287cc11a156e66b39fcb61336d037

Expected results:
The build should succeed.

Additional info:
The build succeeds on OpenShift with an RPM installation, with no errors in the build log.
Cesar has the most experience in running openshift in a container and the weirdness that can ensue. Can you provide more details on how you set up your "openshift in a container" environment?
*** Bug 1374127 has been marked as a duplicate of this bug. ***
I was taking a peek at this as well while Ben was updating. I'll post my notes just in case they help:

That failure corresponds to this line in the s2i assemble script for Jenkins: https://github.com/openshift/jenkins/blob/master/1/contrib/s2i/assemble#L30

At first blush, I would think that `rm -rf ${JENKINS_DIR}/configuration/jobs` would work because of these calls to fix-permissions and chown: https://github.com/openshift/jenkins/blob/master/1/Dockerfile.rhel7#L60-L61

I don't know which actual image is referred to by "registry.ops.../jenkins-1-rhel7 (a2b7f45b9e0d)", so I'm not sure how old it is or whether it has the changes to Dockerfile.rhel7 I just mentioned.
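For reference, based on the build log in the description (not on the script source itself), the failing step has roughly this shape; the exact commands and variable names in the real assemble script may differ:

    # remove the sample job shipped in the image, then move the
    # user-provided configuration into place
    rm -rf "${JENKINS_DIR}/configuration/jobs"
    mv "${TMP_DIR}/configuration/jobs" "${JENKINS_DIR}/configuration/jobs"

Both commands fail with "Directory not empty", which is unusual for rm -rf and is what makes the directory state right before removal interesting.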
I cannot reproduce this bug on a local 'cluster up' install which runs openshift containerized.

It'd also be good to get a higher level of logging from the build:

    oc set env bc/jenkins-master BUILD_LOGLEVEL=8

and then run the build again:

    oc start-build bc/jenkins-master --follow
The fundamental issue here is that it works fine in a "normal" openshift cluster, but fails when the cluster is being run inside containers... which, as far as I can tell, should make absolutely no difference to the s2i assemble script running inside the jenkins container.
Perhaps a debug run could be performed while running in a container, with the assemble script modified to do

    ls -la ${JENKINS_DIR}/configuration/jobs
    ls -la ${JENKINS_DIR}/configuration
    ls -la ${JENKINS_DIR}

prior to the rm -rf?
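A minimal sketch of that debug change, assuming the assemble script removes the jobs directory with a plain rm -rf as discussed above (the surrounding lines are illustrative, not the script's exact contents):

    # --- debug additions: capture directory state right before removal ---
    ls -la "${JENKINS_DIR}/configuration/jobs"
    ls -la "${JENKINS_DIR}/configuration"
    ls -la "${JENKINS_DIR}"
    # original removal step
    rm -rf "${JENKINS_DIR}/configuration/jobs"
    # --- debug addition: show what, if anything, survived the removal ---
    ls -la "${JENKINS_DIR}/configuration" || true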
Gabe, should the script have 'set -e' at the beginning? Right now, even if something fails earlier, the script just keeps going.
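For illustration, a hedged sketch of what that would look like at the top of the assemble script (whether plain set -e or a stricter variant is appropriate is a maintainer call):

    #!/bin/bash
    # abort the build on the first failing command instead of
    # continuing past earlier errors
    set -e
    # optionally also fail on unset variables and on errors inside pipelines
    # set -euo pipefail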
Cesar, yeah, the job configuration for the slave really shouldn't have the openshift sample job as one of the jobs. We don't want to build an image like that, I would think.
@dyan did you use ansible to set up the containerized install?
Yes, I think so. I used our jenkins job to install openshift, but I'm not familiar with the installation process, sorry for that.
This appears to be specific to the overlayfs storage driver; assigning to the containers team.
Is there a simple reproducer you could put together to show this?
I can launch an openshift env if you need one.
Could you attempt this with the docker-1.12 version using overlay2 as the back end? There have been some changes in this code.
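For anyone retesting, a hedged sketch of one way to switch a test node to overlay2 with docker 1.12; on RHEL-based hosts the driver is often configured via /etc/sysconfig/docker-storage-setup instead, and changing the storage driver makes existing images and containers inaccessible, so only try this on a disposable node:

    # set the driver in /etc/docker/daemon.json, e.g.:  { "storage-driver": "overlay2" }
    # then restart docker and confirm which driver is in use
    systemctl restart docker
    docker info | grep -i 'storage driver'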
Paste "docker info" output as well. Noting in bug proves that it is an overlay issue. So provide a low level reproducer. Following is error message and that does not suggest it is an overlay issue. ---> Removing sample Jenkins job ... rm: cannot remove '/opt/openshift/configuration/jobs/OpenShift Sample': Directory not empty ---> Installing new Jenkins configuration ... mv: cannot move '/tmp/tmp.KROeHhepEvjenkins/configuration/jobs' to '/opt/openshift/configuration/jobs': Directory not empty
Vivek, did you read the comment history? This issue only occurs when the overlay filesystem is used with docker; it doesn't happen with device mapper. https://bugzilla.redhat.com/show_bug.cgi?id=1374249#c14
"examples/jenkins/master-slave" sub-dir has been clean up (https://github.com/openshift/origin/pull/11844), and cannot reproduce this issue any more
This should be working in OCP v3.5.0.10 or newer.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0884