| Summary: | [OSP-Director][9.0]: undercloud installation fails over missing variable 'net.ipv6.ip_nonlocal_bind' in /sbin/sysctl (happens on BM) | ||
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Omri Hochman <ohochman> |
| Component: | documentation | Assignee: | Dan Macpherson <dmacpher> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | RHOS Documentation Team <rhos-docs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | ||
| Version: | 9.0 (Mitaka) | CC: | anande, augol, cpaquin, dbecker, dmacpher, ipilcher, jason.dobies, jcoufal, jraju, mburns, mcornea, morazi, owalsh, rhel-osp-director-maint, sasha, sathlang, sclewis, smalleni, srevivo, tvignaud |
| Target Milestone: | async | Keywords: | Triaged |
| Target Release: | 9.0 (Mitaka) | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | The kernel requirement for undercloud installation is to have at least kernel x86_64 3.10.0-327.28.3.el7. | Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2017-03-20 14:08:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
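
The Doc Text above reduces to a single check: the running kernel must be at least 3.10.0-327.28.3.el7. As a sketch only (this is not a script shipped with instack-undercloud, and using `sort -V` for the comparison is an assumption that holds for these particular RPM-style version strings), the condition could be verified before starting the installation:

    #!/bin/bash
    # Sketch: pre-flight check for the known issue documented above; not part
    # of instack-undercloud. net.ipv6.ip_nonlocal_bind first appears in
    # kernel-3.10.0-327.28.3.el7, so refuse to install on anything older.
    required="3.10.0-327.28.3.el7.x86_64"
    running="$(uname -r)"

    # sort -V orders these RPM-style strings correctly here: if the required
    # version sorts first (or equals the running one), the kernel is new enough.
    if [ "$(printf '%s\n' "$required" "$running" | sort -V | head -n1)" = "$required" ]; then
        echo "OK: kernel $running should provide net.ipv6.ip_nonlocal_bind"
    else
        echo "kernel $running predates $required -- yum update and reboot first" >&2
        exit 1
    fi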
investigation:

(1) attempting to run the command manually:

[root@undercloud72 ipv6]# /sbin/sysctl net.ipv6.ip_nonlocal_bind
sysctl: cannot stat /proc/sys/net/ipv6/ip_nonlocal_bind: No such file or directory

(2) check available variables:

[root@undercloud72 ipv6]# /sbin/sysctl -a | grep net.ipv6.ip
net.ipv6.ip6frag_high_thresh = 4194304
net.ipv6.ip6frag_low_thresh = 3145728
net.ipv6.ip6frag_secret_interval = 600
net.ipv6.ip6frag_time = 60

I couldn't find the variable net.ipv6.ip_nonlocal_bind in sysctl.

(3) yum provides sysctl ->

procps-ng-3.3.10-3.el7.i686 : System and process monitoring utilities
Repo        : rhelosp-rhel-7.2-server
Matched from:
Filename    : /usr/sbin/sysctl

(In reply to Omri Hochman from comment #1)
> investigation:
>
> (2) check available variables:
>
> [root@undercloud72 ipv6]# /sbin/sysctl -a | grep net.ipv6.ip
> net.ipv6.ip6frag_high_thresh = 4194304
> net.ipv6.ip6frag_low_thresh = 3145728
> net.ipv6.ip6frag_secret_interval = 600
> net.ipv6.ip6frag_time = 60
>
> (3) yum provides sysctl ->
> procps-ng-3.3.10-3.el7.i686 : System and process monitoring utilities

Checking on the virt environment (where the undercloud installation finished successfully):

[stack@instack ~]$ sudo /sbin/sysctl -a | grep net.ipv6.ip
net.ipv6.ip6frag_high_thresh = 4194304
net.ipv6.ip6frag_low_thresh = 3145728
net.ipv6.ip6frag_secret_interval = 600
net.ipv6.ip6frag_time = 60
net.ipv6.ip_nonlocal_bind = 1

[stack@instack ~]$ rpm -qa | grep procps-ng
procps-ng-3.3.10-5.el7_2.x86_64

It seems that both the virt env and the BM env have procps-ng-3.3.10-5.el7_2.x86_64 installed, but on the BM env:
(1) the file /proc/sys/net/ipv6/ip_nonlocal_bind is missing.
(2) net.ipv6.ip_nonlocal_bind = 1 is missing from the output of sudo /sbin/sysctl.
Both exist on the virt env.

Support for IPv6 non-local bind was introduced by kernel-3.10.0-327.28.3.el7.x86_64, so if the undercloud is running a version lower than that, it fails. I see that on the BM env:

[stack@undercloud72 ~]$ sudo uname -r
3.10.0-327.el7.x86_64

while on the virt setup:

[stack@instack ~]$ uname -r
3.10.0-327.28.3.el7.x86_64

Based on Marius's comments, this looks like a trivial "yum update" and reboot to ensure the latest kernel is running. From a bugfix perspective, we could add a Requires: entry to one of the packages (like instack-undercloud). I hesitate to do that because it's a departure from the RDO packaging (kernel versions aren't necessarily on the same scheme). Instead, I'd suggest we update the docs here:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/22-undercloud-requirements
and add a note or comment like "Latest Red Hat Enterprise Linux 7.2 installed as the host operating system, including any updates".

Do we know when that version of the kernel was shipped?

Verified the workaround: upgraded the kernel to 3.10.0-327.28.3.el7 and rebooted before starting the undercloud installation; the undercloud install then finished successfully.

=================================================================================
 Package          Arch    Version                     Repository               Size
=================================================================================
Installing:
 kernel           x86_64  3.10.0-327.28.3.el7         rhelosp-rhel-7.2-z       33 M
Installing for dependencies:
 linux-firmware   noarch  20150904-43.git6ebf5d5.el7  rhelosp-rhel-7.2-server  24 M

Transaction Summary
=================================================================================
Install  1 Package (+1 Dependent package)

*** Bug 1370358 has been marked as a duplicate of this bug. ***
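
Spelled out as a command sequence, the verified workaround above looks like this (a sketch; the exact package NVR is taken from the transaction log above, and availability in the attached repos is assumed):

    # On the bare-metal undercloud host, before the undercloud installation:
    sudo yum install kernel-3.10.0-327.28.3.el7   # pulls in linux-firmware as a dependency
    sudo reboot

    # After the reboot, confirm the kernel and that the sysctl now exists:
    uname -r                                      # expect 3.10.0-327.28.3.el7.x86_64
    /sbin/sysctl net.ipv6.ip_nonlocal_bind        # should no longer report "cannot stat"

    # Then re-run the installation:
    openstack undercloud install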
I am running into this issue during the OSP 8 to 9 upgrade. The undercloud has kernel version 3.10.0-327.36.1.el7.x86_64 and the command runs without issue:

[root@tcpvcpsb1uc6 ~]# /sbin/sysctl net.ipv6.ip_nonlocal_bind
net.ipv6.ip_nonlocal_bind = 1

An overcloud compute node is running 3.10.0-327.18.2.el7.x86_64:

$ /sbin/sysctl net.ipv6.ip_nonlocal_bind
sysctl: cannot stat /proc/sys/net/ipv6/ip_nonlocal_bind: No such file or directory

however, it has not been rebooted since the yum update. The kernel version after the yum update is 3.10.0-514.2.2.el7.x86_64. It now returns the proper output:

[heat-admin@tpavcpsb1comp0 ~]$ /sbin/sysctl net.ipv6.ip_nonlocal_bind
net.ipv6.ip_nonlocal_bind = 0

It appears to me that a reboot of each overcloud node is required after the yum updates, but prior to the final step of the upgrade to OSP 9. However, this wording does not appear in the documentation; instead, a reboot is only suggested if required:

"The update process does not reboot any nodes in the Overcloud automatically. If required, perform a reboot manually after the update command completes."

We are going to try a reboot and attempt to redeploy.

Haven't we always said that an update to the latest bits of the installed version was required before upgrading?

(In reply to Ian Pilcher from comment #13)
> Haven't we always said that an update to the latest bits of the installed
> version was required before upgrading?

If we are saying that, then should we not make sure that the official doc says it as well? The upgrade document states that we should reboot if required; it does not explicitly state that we should reboot. Plus, the section on rebooting if required comes after the upgrade process, and in this case the reboot is required to complete the update.

I believe the reboot here is required because of the kernel version change, so this is the case of "If required, perform a reboot manually after the update command completes." We cannot explicitly instruct customers to reboot nodes because we don't know between which states they are transitioning. If a customer is already on RHEL 7.3 and performs updates, they would not need to reboot. If a customer is on 7.2 and the update brings them to 7.3, they will. What we can do is provide a side note with a specific example of when the reboot is required (such as a kernel update from RHEL 7.2 to RHEL 7.3). Another improvement for the docs would be to explicitly state that a complete, successful update to the latest version and a fully operational overcloud are prerequisites of the upgrade procedure. Dan, could you take care of this docs improvement (I believe it applies across all docs versions)?

Hi, I think this is closely related to https://bugzilla.redhat.com/show_bug.cgi?id=1413199, and the documentation for both bugs should appear at the same spot in the documentation. We should include this matrix somewhere and then add the fact that a major RHEL upgrade needs a reboot of all the nodes. Those RHEL major upgrades should most of the time be done during an OSP minor update.

Taking this BZ. Added follow-up information and discussion on https://bugzilla.redhat.com/show_bug.cgi?id=1413199

Hi Chris, is it OK if we track this in the aforementioned bugzilla and close this one as a duplicate?

That is a good question. This bug is specifically related to a kernel param that is only fixed by updating to kernel 3.10.0-327.21.1.el7 or later. If a customer is updating using the latest repos, they should no longer run into this error. So it should be safe to close. That being said, the documentation should state that when upgrading, a reboot is required after an updated kernel is installed.

Hi Dan, we also need to reboot the nodes if Open vSwitch gets a major version change.

Note added to reboot if the kernel or Open vSwitch are updated:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/10/single/upgrading-red-hat-openstack-platform/#sect-Updating_Director_Packages
Sofer, anything further to add to this note?

Hi Dan, it should be specified that the reboot is required only if the kernel or Open vSwitch gets a *major* version change. For the kernel that means a RHEL version bump (7.2 -> 7.3); for Open vSwitch it is a change like 2.4 -> 2.5. Minor version upgrades of those two components do not require a restart. Thanks.

Hi Sofer, I think we might have to get the terminology correct, or else we might cause some confusion. From what I've read [1], a major version upgrade means 6.x -> 7.x, while a minor version upgrade means 7.2 -> 7.3. So following this, a reboot would apply to both major and minor version upgrades, but not z-stream updates. Would you agree?

[1] https://access.redhat.com/solutions/401413

Hi Sofer, just following up on this. See comment #25.

Hi Dan, I meant a change in the kernel version, which happens each time RHEL is bumped. So yes, minor/major version upgrades of RHEL, but not z-stream. Thanks for the clarification.

Thanks, Sofer. I've modified the note to the following:

"Major and minor version updates to the kernel or Open vSwitch require a reboot. For example, if your undercloud operating system updates from Red Hat Enterprise Linux 7.2 to 7.3, or Open vSwitch from version 2.4 to 2.5, please reboot."

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/upgrading_red_hat_openstack_platform/#sect-Updating_Director_Packages

How does that sound now?

Hi Dan, that's clear enough. Thank you.

Thanks, Sofer!
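
The rule the thread converges on can be expressed as a quick post-update check. This is only a sketch of that guidance, not a supported tool; the version-string parsing is an assumption, and the kernel check is deliberately stricter than the final doc note (which exempts z-stream updates), since this very bug was fixed by a z-stream kernel:

    #!/bin/bash
    # Sketch: decide whether a reboot is needed after 'yum update', per the
    # note discussed above. Reboot when the newest installed kernel is not
    # the one running, or when the Open vSwitch major.minor version changed
    # (e.g. 2.4 -> 2.5; a z-stream bump of OVS does not require it).

    newest_kernel="$(rpm -q --last kernel | head -n1 | awk '{print $1}' | sed 's/^kernel-//')"
    if [ "$(uname -r)" != "$newest_kernel" ]; then
        echo "reboot needed: kernel $(uname -r) running, $newest_kernel installed"
    fi

    # Compare the running Open vSwitch major.minor against the installed package.
    running_ovs="$(ovs-vsctl --version | awk 'NR==1 {print $NF}' | cut -d. -f1,2)"
    installed_ovs="$(rpm -q --qf '%{VERSION}\n' openvswitch | cut -d. -f1,2)"
    if [ "$running_ovs" != "$installed_ovs" ]; then
        echo "reboot needed: Open vSwitch $running_ovs running, $installed_ovs installed"
    fi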
[OSP-Director][9.0] : undercloud installation fails on net.ipv6.ip_nonlocal_bind ERROR (happens only BM env).

Environment 9.0 (0-day puddle):
---------------------------------
instack-0.0.8-3.el7ost.noarch
instack-undercloud-4.0.0-13.el7ost.noarch
puppet-3.6.2-4.el7sat.noarch
openstack-tripleo-puppet-elements-2.0.0-4.el7ost.noarch
openstack-puppet-modules-8.1.8-2.el7ost.noarch
openstack-tripleo-heat-templates-liberty-2.0.0-33.el7ost.noarch
openstack-tripleo-heat-templates-2.0.0-33.el7ost.noarch
python-uri-templates-0.6-5.el7ost.noarch
openstack-heat-templates-0-0.8.20150605git.el7ost.noarch

Steps:
-------
Attempt to deploy undercloud on Bare-Metal environment (last 9.0 puddle)

Results:
---------
Undercloud installation fails

undercloud errors:
------------------
17:45:51 Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet
17:45:51 Notice: /Stage[main]/Main/Ring_account_device[192.168.0.1:6002/1]/ensure: created
17:46:02 Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events
17:46:02 Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet
17:46:02 Notice: /Stage[main]/Main/Ring_object_device[192.168.0.1:6000/1]/ensure: created
17:46:14 Notice: /Stage[main]/Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events
17:46:14 Notice: /Stage[main]/Ironic::Inspector/Ironic_inspector_config[swift/tenant_name]/ensure: created
17:46:14 Notice: /Stage[main]/Ceilometer/Ceilometer_config[oslo_messaging_rabbit/rabbit_use_ssl]/ensure: created
17:46:14 Notice: /Stage[main]/Ironic::Logging/Ironic_config[DEFAULT/debug]/ensure: created
17:46:14 Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat::Fragment[swift-container-6001]/File[/var/lib/puppet/concat/_etc_swift_container-server.conf/fragments/00_swift-container-6001]/ensure: defined content as '{md5}01fecd6e6b4874b5f7667a2def3a7b0e'
17:46:14 Notice: /Stage[main]/Main/Swift::Storage::Filter::Healthcheck[account]/Concat::Fragment[swift_healthcheck_account]/File[/var/lib/puppet/concat/_etc_swift_account-server.conf/fragments/25_swift_healthcheck_account]/ensure: defined content as '{md5}8c92056c41082619d179f88ea15c5fc6'
17:46:20 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Package[neutron-ovs-agent]/ensure: created
17:46:20 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created
17:46:20 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created
17:46:20 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/enable_tunneling]/ensure: created
17:46:20 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created
17:46:20 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created
17:46:21 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[ovs-cleanup-service]/enable: enable changed 'false' to 'true'
17:46:21 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created
17:46:21 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created
17:46:21 Notice: /Stage[main]/Ceilometer::Api/Ceilometer_config[api/port]/ensure: created
17:46:21 Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created
17:46:21 Notice: /Stage[main]/Ironic::Inspector::Logging/Ironic_inspector_config[DEFAULT/debug]/ensure: created
17:46:21 Notice: /Stage[main]/Aodh::Api/Aodh_config[keystone_authtoken/auth_uri]/ensure: created
17:46:26 Notice: /Stage[main]/Aodh::Api/Package[aodh-api]/ensure: created
17:46:26 Notice: /Stage[main]/Main/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created
17:46:26 Error: Could not prefetch sysctl_runtime provider 'sysctl_runtime': sysctl parameter net.ipv6.ip_nonlocal_bind wasn't found on this system
17:46:26 Notice: /Stage[main]/Main/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl_runtime[net.ipv4.ip_nonlocal_bind]/val: val changed '0' to '1'
17:46:26 Notice: /Stage[main]/Main/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created
17:46:26 Error: Execution of '/sbin/sysctl net.ipv6.ip_nonlocal_bind=1' returned 255: sysctl: cannot stat /proc/sys/net/ipv6/ip_nonlocal_bind: No such file or directory
17:46:26 Error: /Stage[main]/Main/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl_runtime[net.ipv6.ip_nonlocal_bind]/val: change from absent to 1 failed: Execution of '/sbin/sysctl net.ipv6.ip_nonlocal_bind=1' returned 255: sysctl: cannot stat /proc/sys/net/ipv6/ip_nonlocal_bind: No such file or directory
--
--
--
--
#############the last part##############
17:53:14 Notice: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_endpoint[regionOne/ironic-inspector::baremetal-introspection]/ensure: created
17:53:20 Notice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_endpoint[regionOne/swift::object-store]/ensure: created
17:53:23 Notice: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_endpoint[regionOne/ironic::baremetal]/ensure: created
17:53:28 Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_endpoint[regionOne/ceilometer::metering]/ensure: created
17:53:32 Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_endpoint[regionOne/neutron::network]/ensure: created
17:55:04 Notice: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Triggered 'refresh' from 56 events
17:55:07 Notice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]/ensure: created
17:55:10 Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: created
17:55:10 Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/description: description changed 'Bootstrap project for initializing the cloud.' to 'admin tenant'
17:55:15 Notice: /Stage[main]/Aodh::Keystone::Auth/Keystone::Resource::Service_identity[aodh]/Keystone_user_role[aodh@service]/ensure: created
17:55:17 Notice: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user_role[nova@service]/ensure: created
17:55:20 Notice: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_user_role[ironic-inspector@service]/ensure: created
17:55:23 Notice: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user_role[heat@service]/ensure: created
17:55:26 Notice: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user_role[swift@service]/ensure: created
17:55:30 Notice: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user_role[ceilometer@service]/ensure: created
17:55:33 Notice: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user_role[ironic@service]/ensure: created
17:55:35 Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]/password: changed password
17:55:35 Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]/email: defined 'email' as 'root@localhost'
17:55:39 Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created
17:55:39 Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created
17:55:39 Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created
17:55:41 Notice: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat_stack]/ensure: created
17:55:44 Notice: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_admin::heat_stack]/ensure: created
17:55:46 Notice: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin::heat_stack@::heat_stack]/ensure: created
17:55:49 Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user_role[glance@service]/ensure: created
17:55:49 Notice: /Stage[main]/Glance::Registry/Service[glance-registry]/ensure: ensure changed 'stopped' to 'running'
17:55:50 Notice: /Stage[main]/Glance::Api/Service[glance-api]/ensure: ensure changed 'stopped' to 'running'
17:55:53 Notice: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user_role[neutron@service]/ensure: created
17:55:58 Notice: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: ensure changed 'stopped' to 'running'
17:55:58 Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]/ensure: ensure changed 'stopped' to 'running'
17:55:58 Notice: /Stage[main]/Heat/Heat_config[oslo_messaging_rabbit/rabbit_host]/ensure: created
17:55:58 Notice: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Triggered 'refresh' from 35 events
17:55:58 Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/ensure: created
17:55:58 Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/ensure: created
17:55:58 Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]/ensure: created
17:55:58 Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_192.168.0.1]/Mysql_user[heat@192.168.0.1]/ensure: created
17:55:58 Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_192.168.0.1]/Mysql_grant[heat@192.168.0.1/heat.*]/ensure: created
17:55:58 Notice: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Triggered 'refresh' from 1 events
17:55:58 Notice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Triggered 'refresh' from 1 events
17:56:33 Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Triggered 'refresh' from 3 events
17:56:33 Notice: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Triggered 'refresh' from 1 events
17:56:33 Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Triggered 'refresh' from 3 events
17:56:34 Notice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]/ensure: ensure changed 'stopped' to 'running'
17:56:35 Notice: /Stage[main]/Heat::Engine/Service[heat-engine]/ensure: ensure changed 'stopped' to 'running'
17:56:35 Notice: /Stage[main]/Heat::Api/Service[heat-api]/ensure: ensure changed 'stopped' to 'running'
17:56:35 Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 3 events
17:56:36 Notice: Finished catalog run in 1767.15 seconds
17:56:42 + rc=6
17:56:42 + set -e
17:56:42 + echo 'puppet apply exited with exit code 6'
17:56:42 puppet apply exited with exit code 6
17:56:42 + '[' 6 '!=' 2 -a 6 '!=' 0 ']'
17:56:42 + exit 6
17:56:42 [2016-06-28 14:23:12,987] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6]
17:56:42
17:56:42 [2016-06-28 14:23:12,988] (os-refresh-config) [ERROR] Aborting...
17:56:42 Traceback (most recent call last):
17:56:42   File "<string>", line 1, in <module>
17:56:42   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 845, in install
17:56:42     _run_orc(instack_env)
17:56:42   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 735, in _run_orc
17:56:42     _run_live_command(args, instack_env, 'os-refresh-config')
17:56:42   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 406, in _run_live_command
17:56:42     raise RuntimeError('%s failed. See log for details.' % name)
17:56:42 RuntimeError: os-refresh-config failed. See log for details.
17:56:42 Command 'instack-install-undercloud' returned non-zero exit status 1
17:56:42 Failed to deploy undercloud
17:56:42 Build step 'Virtualenv Builder' marked build as failure
17:56:42 Build step 'Groovy Postbuild' marked build as failure
17:56:42 Build step 'Groovy Postbuild' marked build as failure
17:56:42 [BFA] Scanning build for known causes...
17:56:42 [BFA] Found failure cause(s):
17:56:42 [BFA]   instack-install-undercloud returned a non-zero status
17:56:42 [BFA]   PUPPET apply error from category Puppet
17:56:42 [BFA] Done. 0s
17:56:42 Started calculate disk usage of build
17:56:42 Finished Calculation of disk usage of build in 0 seconds
17:56:42 Started calculate disk usage of workspace
17:56:43 Finished Calculation of disk usage of workspace in 0 seconds
17:56:43 Finished: FAILURE
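
For anyone triaging a similar run: the three Error lines above are the only real failure in roughly thirty minutes of Puppet output. A quick way to surface them (the log path below is the usual instack-undercloud location on OSP 9 and is an assumption here; adjust if yours differs):

    # Pull just the Error lines out of the undercloud install log:
    grep 'Error:' ~/.instack/install-undercloud.log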