Bug 1298735 - 'openstack overcloud image build --all' fails silently
Summary: 'openstack overcloud image build --all' fails silently
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: Angus Thomas
QA Contact: Omri Hochman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-14 21:04 UTC by Steve Linabery
Modified: 2016-10-18 04:14 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-18 04:14:23 UTC
Target Upstream Version:



Description Steve Linabery 2016-01-14 21:04:00 UTC
Description of problem:
In a poodle CI run, the image build exits with status 0 despite failing to build the images. The command named in the subject produces the following output:

<snip>
++ export OS_COLLECT_CONFIG_VENV_DIR=/opt/stack/venvs/os-collect-config
++ OS_COLLECT_CONFIG_VENV_DIR=/opt/stack/venvs/os-collect-config
+ for env_file in '$env_files'
+ source /var/tmp/image.vN6l3ryX/hooks/root.d/../environment.d/10-os-net-config-venv-dir.bash
++ '[' -z '' ']'
++ export OS_NET_CONFIG_VENV_DIR=/opt/stack/venvs/os-net-config
++ OS_NET_CONFIG_VENV_DIR=/opt/stack/venvs/os-net-config
++ '[' -z '' ']'
++ export OS_NET_CONFIG_INSTALL_OPTS=
++ OS_NET_CONFIG_INSTALL_OPTS=
+ for env_file in '$env_files'
+ source /var/tmp/image.vN6l3ryX/hooks/root.d/../environment.d/10-rhel7-distro-name.bash
++ export DISTRO_NAME=rhel7
++ DISTRO_NAME=rhel7
+ for env_file in '$env_files'
+ source /var/tmp/image.vN6l3ryX/hooks/root.d/../environment.d/14-manifests
++ '[' 0 -gt 0 ']'
++ set -eu
++ set -o pipefail
++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests
++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests
++ export DIB_MANIFEST_SAVE_DIR=overcloud-full.d/
++ DIB_MANIFEST_SAVE_DIR=overcloud-full.d/
+ for env_file in '$env_files'
+ source /var/tmp/image.vN6l3ryX/hooks/root.d/../environment.d/15-pip-manifests
++ set -eu
++ export DIB_MANIFEST_PIP_DIR=/etc/dib-manifests/dib-manifests-pip
++ DIB_MANIFEST_PIP_DIR=/etc/dib-manifests/dib-manifests-pip
+ for target in '$targets'
+ output 'Running /var/tmp/image.vN6l3ryX/hooks/root.d/01-ccache'
++ date
+ echo dib-run-parts Thu Jan 14 09:18:58 EST 2016 Running /var/tmp/image.vN6l3ryX/hooks/root.d/01-ccache
dib-run-parts Thu Jan 14 09:18:58 EST 2016 Running /var/tmp/image.vN6l3ryX/hooks/root.d/01-ccache
+ target_tag=01-ccache
+ date +%s.%N
+ /var/tmp/image.vN6l3ryX/hooks/root.d/01-ccache
+ target_tag=01-ccache
+ date +%s.%N
+ output '01-ccache completed'
++ date
+ echo dib-run-parts Thu Jan 14 09:18:58 EST 2016 01-ccache completed
dib-run-parts Thu Jan 14 09:18:58 EST 2016 01-ccache completed
+ for target in '$targets'
+ output 'Running /var/tmp/image.vN6l3ryX/hooks/root.d/10-rhel7-cloud-image'
++ date
+ echo dib-run-parts Thu Jan 14 09:18:58 EST 2016 Running /var/tmp/image.vN6l3ryX/hooks/root.d/10-rhel7-cloud-image
dib-run-parts Thu Jan 14 09:18:58 EST 2016 Running /var/tmp/image.vN6l3ryX/hooks/root.d/10-rhel7-cloud-image
+ target_tag=10-rhel7-cloud-image
+ date +%s.%N
+ /var/tmp/image.vN6l3ryX/hooks/root.d/10-rhel7-cloud-image
Getting /home/stack/.cache/image-create/rhel-guest-image-7.2-20151102.0.x86_64.qcow2.tgz.lock: Thu Jan 14 09:18:58 EST 2016
Repacking base image as tarball.
Working in /var/tmp/tmp.M65ZVQ2us6
qemu-img: Could not open 'rhel-guest-image-7.2-20151102.0.x86_64.qcow2': Could not open 'rhel-guest-image-7.2-20151102.0.x86_64.qcow2': No such file or directory
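The trace above ends with a qemu-img error, yet the overall command reported exit status 0. One plausible mechanism for this kind of masking (an assumption here, not confirmed as the actual root cause in dib-run-parts) is a shell pipeline without `pipefail`: the pipeline's exit status is that of the last command, so a failing producer piped into a consumer such as `tee` is reported as success.

```shell
#!/bin/bash
# Minimal sketch of the "fails silently" pattern (illustrative only):
# without `set -o pipefail`, a failing command piped into `tee`
# reports the exit status of `tee`, not the failure.
set +o pipefail
false | tee /dev/null
echo "exit status: $?"   # the failure from `false` is masked
```

With `set -o pipefail` (as the `14-manifests` hook above does for its own scope), the same pipeline would report the non-zero status of `false` instead.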

Comment 3 Mike Burns 2016-04-07 21:03:37 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 5 James Slagle 2016-10-14 15:17:33 UTC
Removing from OSP10, not a blocker/regression

Steve, can you also confirm if this is still an issue? Do we even still run these poodle CI jobs?

Comment 6 Steve Linabery 2016-10-18 04:14:23 UTC
The CI that produced this bug is gone, and the issue does not reproduce on OSP 10.

