Description of problem:

When running dib during "openstack undercloud install", the dib run fails because a repo file already exists.

Version-Release number of selected component (if applicable):

diskimage-builder-0.1.46-3.el7ost.noarch

How reproducible:

3/3 of my tests on the most recent puddle have shown this issue.

Steps to Reproduce:
1. Install the latest daily stable puddle.
2. Set the yum repo conf per the instructions in http://openstack.etherpad.corp.redhat.com/rhel-osp-director-puddle-2015-09-25-3:

export DIB_YUM_REPO_CONF="/etc/yum.repos.d/rhos-release-7.repo /etc/yum.repos.d/rhos-release-rhel-7.1.repo /etc/yum.repos.d/rhos-release-7-director.repo"

3. Run "openstack undercloud install".

Actual results:

The command exits with a failure:

+ '[' -z '/etc/yum.repos.d/rhos-release-7.repo /etc/yum.repos.d/rhos-release-rhel-7.1.repo /etc/yum.repos.d/rhos-release-7-director.repo' ']'
+ for file in '$DIB_YUM_REPO_CONF'
+ '[' '!' -f /etc/yum.repos.d/rhos-release-7.repo ']'
+ sudo cp -L -f /etc/yum.repos.d/rhos-release-7.repo /tmp/instack.kVoH_7/mnt/etc/yum.repos.d
cp: ‘/etc/yum.repos.d/rhos-release-7.repo’ and ‘/tmp/instack.kVoH_7/mnt/etc/yum.repos.d/rhos-release-7.repo’ are the same file

INFO: 2015-09-29 11:41:46,147 -- ############### End stdout/stderr logging ###############
ERROR: 2015-09-29 11:41:46,147 -- Hook FAILED.
ERROR: 2015-09-29 11:41:46,147 -- Failed running command ['dib-run-parts', u'/tmp/tmp1XjKLD/extra-data.d']
  File "/usr/lib/python2.7/site-packages/instack/main.py", line 163, in main
    em.run()
  File "/usr/lib/python2.7/site-packages/instack/runner.py", line 79, in run
    self.run_hook(hook)
  File "/usr/lib/python2.7/site-packages/instack/runner.py", line 174, in run_hook
    raise Exception("Failed running command %s" % command)
ERROR: 2015-09-29 11:41:46,147 -- None
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 519, in install
    _run_instack(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 454, in _run_instack
    _run_live_command(args, instack_env, 'instack')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 297, in _run_live_command
    raise RuntimeError('%s failed. See log for details.', name)
RuntimeError: ('%s failed. See log for details.', 'instack')
ERROR: openstack Command 'instack-install-undercloud' returned non-zero exit status 1

Expected results:

Undercloud services are installed and running.

Additional info:

I checked, and the repo file looks correct in the /tmp path. So in /usr/share/diskimage-builder/elements/yum/extra-data.d/99-yum-repo-conf I changed the copy command from:

sudo cp -L -f /etc/yum.repos.d/rhos-release-7.repo /tmp/instack.kVoH_7/mnt/etc/yum.repos.d

to:

sudo cp -L -f /etc/yum.repos.d/rhos-release-7.repo /tmp/instack.kVoH_7/mnt/etc/yum.repos.d || true

After that change, the dib and puppet runs completed successfully.
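For reference, here is a minimal bash sketch of a more defensive copy loop for the yum element. It is hedged, not the shipped code: it assumes the element's existing $DIB_YUM_REPO_CONF and $TMP_MOUNT_PATH variables, and "dest" is a helper name introduced only for illustration. Skipping the copy when source and destination are already the same file avoids the failure above without hiding every cp error the way a blanket "|| true" would:

# Hedged sketch of a safer copy loop for 99-yum-repo-conf.
for file in $DIB_YUM_REPO_CONF; do
    if [ ! -f "$file" ]; then
        echo "Repo file $file does not exist" >&2
        exit 1
    fi
    dest="$TMP_MOUNT_PATH/etc/yum.repos.d/$(basename "$file")"
    # Skip the copy when source and destination share device and inode,
    # which is exactly the case cp rejects with "are the same file".
    if ! [ "$file" -ef "$dest" ]; then
        sudo cp -L -f "$file" "$dest"
    fi
done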
So the issue here is that this export should not be defined prior to running "openstack undercloud install". I can see how the puddle instructions cause some confusion here: when they talk about virt deployments or image builds, the virt deployment part means prior to running instack-virt-setup. I'll update them accordingly. Still, I think the issue you're pointing out can be handled better, so I will leave this open for y2.
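To make the intended ordering concrete, here is a hedged sketch that condenses both steps into one shell session and reuses the repo list from the bug description (the exact repo files vary by puddle):

# Set the export only for the image build / virt setup step:
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/rhos-release-7.repo /etc/yum.repos.d/rhos-release-rhel-7.1.repo /etc/yum.repos.d/rhos-release-7-director.repo"
instack-virt-setup

# ...and make sure it is no longer set when installing the undercloud:
unset DIB_YUM_REPO_CONF
openstack undercloud install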
So the DIB_YUM_REPO_CONF environment variable needs to be set before yum-installing the rdomanager-oscplugin package?
I've pushed an upstream change to unset DIB_YUM_REPO_CONF before doing the undercloud install: https://review.openstack.org/#/c/271557/

That should fix this bug, even if the image build is done before the undercloud install (which is what typically causes the problem). It is worth noting, however, that this isn't something customers would see, since we ship pre-built images.
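The equivalent effect can be had from the command line (a minimal illustration of what the change accomplishes, not the patch itself; see the review above for the actual implementation):

# Run the install with the variable stripped from the environment, so the
# yum element never tries to copy the host's repo files onto themselves.
env -u DIB_YUM_REPO_CONF openstack undercloud install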
This has been fixed for multiple releases and just needs verification from QE. That would involve setting DIB_YUM_REPO_CONF for the image build and then running the undercloud install. It should no longer fail on the "same file" error as reported in this bz.
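A hedged outline of those verification steps (the image build command assumed here is the OSP 7 era "openstack overcloud image build --all"; adjust to whatever the release documentation specifies, and note that the variable is deliberately left set when the install runs):

# Set the variable for the image build...
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/rhos-release-7.repo /etc/yum.repos.d/rhos-release-rhel-7.1.repo /etc/yum.repos.d/rhos-release-7-director.repo"
openstack overcloud image build --all

# ...then run the undercloud install without unsetting it; it should
# now complete instead of failing with "are the same file".
openstack undercloud install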
Deployment of the latest 7 puddle passes downstream automation.