[OSP-director-9.0][backwards compatibility] After a successful 7.3 -> 8.0 undercloud upgrade, the subsequent 8.0 -> 9.0 undercloud upgrade fails.

Environment:
-------------
instack-undercloud-4.0.0-5.el7ost.noarch
instack-0.0.8-3.el7ost.noarch
openstack-tripleo-puppet-elements-2.0.0-2.el7ost.noarch
puppet-3.6.2-2.el7.noarch
openstack-puppet-modules-8.1.2-1.el7ost.noarch
openstack-tripleo-heat-templates-2.0.0-12.el7ost.noarch
openstack-heat-engine-6.0.0-6.el7ost.noarch
python-heatclient-1.2.0-1.el7ost.noarch
openstack-tripleo-heat-templates-kilo-2.0.0-12.el7ost.noarch
openstack-heat-common-6.0.0-6.el7ost.noarch
openstack-heat-api-6.0.0-6.el7ost.noarch
heat-cfntools-1.3.0-2.el7ost.noarch
openstack-tripleo-heat-templates-liberty-2.0.0-12.el7ost.noarch
openstack-heat-api-cfn-6.0.0-6.el7ost.noarch
openstack-heat-templates-0-0.8.20150605git.el7ost.noarch
openstack-heat-api-cloudwatch-6.0.0-6.el7ost.noarch

Scenario:
---------
(1) Deploy the undercloud and overcloud on the latest 7.3 Async.
(2) Upgrade the undercloud to 8.0 Async (use: sudo rhos-release -P 8-director -r 7.2). This step finishes successfully.
(3) Upgrade the undercloud to the latest 9.0 (use: sudo rhos-release -P 9-director).

Results:
--------
After the successful 7.3 -> 8.0 undercloud upgrade, the 8.0 -> 9.0 undercloud upgrade fails:

[2016-07-06 15:53:42,853] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 845, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 735, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 406, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1
There was an error running openstack undercloud upgrade. Exiting....

Undercloud upgrade 8.0 Async to 9.0, console output:
----------------------------------------------------
Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not restart Service[httpd]: Execution of '/bin/systemctl restart httpd' returned 1: Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
Error: /Stage[main]/Apache::Service/Service[httpd]: Could not restart Service[httpd]: Execution of '/bin/systemctl restart httpd' returned 1: Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
Wrapped exception: Execution of '/bin/systemctl restart httpd' returned 1: Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Triggered 'refresh' from 1 events
Broadcast message from systemd-journald (Wed 2016-07-06 15:36:37 EDT): haproxy[27494]: proxy keystone_admin has no server available!
Broadcast message from systemd-journald (Wed 2016-07-06 15:36:37 EDT): haproxy[27494]: proxy keystone_public has no server available!
Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/services
Error: Not managing Keystone_service[Image Service] due to earlier Keystone API failures.
Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[Image Service::image]/ensure: change from absent to present failed: Not managing Keystone_service[Image Service] due to earlier Keystone API failures.
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/roles
Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
Error: /Stage[main]/Aodh::Keystone::Auth/Keystone::Resource::Service_identity[aodh]/Keystone_user[aodh]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: Not managing Keystone_service[aodh] due to earlier Keystone API failures.
Error: /Stage[main]/Aodh::Keystone::Auth/Keystone::Resource::Service_identity[aodh]/Keystone_service[aodh::alarming]/ensure: change from absent to present failed: Not managing Keystone_service[aodh] due to earlier Keystone API failures.
Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: Not managing Keystone_service[nova] due to earlier Keystone API failures.
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures.
Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures.
Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures.
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures.
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures.
Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures.
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_user[ironic-inspector]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/domains
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/region_name]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/region_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/send_events_interval]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/send_events_interval]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]: Skipping because of failed dependencies
Error: Not managing Keystone_service[swift] due to earlier Keystone API failures.
Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures.
Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures.
Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures.
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]: Skipping because of failed dependencies
Error: Not managing Keystone_service[ironic-inspector] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth_inspector/Keystone::Resource::Service_identity[ironic-inspector]/Keystone_service[ironic-inspector::baremetal-introspection]/ensure: change from absent to present failed: Not managing Keystone_service[ironic-inspector] due to earlier Keystone API failures.
Error: Not managing Keystone_service[heat] due to earlier Keystone API failures.
Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures.
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_url]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/nova_url]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Dependency Keystone_user[nova] has failures: true
Warning: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Triggered 'refresh' from 50 events
Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Unable to establish connection to http://192.0.2.1:35357/v3/endpoints
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[keystone::identity] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[heat::orchestration] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[neutron::network] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[Image Service::image] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[nova::compute] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[novav3::computev3] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[ceilometer::metering] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[aodh::alarming] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[swift::object-store] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[ironic::baremetal] has failures: true
Notice: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Dependency Keystone_service[ironic-inspector::baremetal-introspection] has failures: true
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[regionOne/Image Service::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Engine/Service[heat-engine]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Engine/Service[heat-engine]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Keystone_tenant[service] has failures: true
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Keystone_tenant[admin] has failures: true
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Keystone_role[admin] has failures: true
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Dependency Keystone_user[admin] has failures: true
Warning: /Stage[main]/Heat::Api/Service[heat-api]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Api/Service[heat-api]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Dependency Keystone_tenant[service] has failures: true
Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Dependency Keystone_tenant[admin] has failures: true
Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Dependency Keystone_role[admin] has failures: true
Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Dependency Keystone_user[admin] has failures: true
Warning: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Skipping because of failed dependencies
Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 3 events
Notice: Finished catalog run in 1543.67 seconds
+ rc=6
+ set -e
+ echo 'puppet apply exited with exit code 6'
puppet apply exited with exit code 6
+ '[' 6 '!=' 2 -a 6 '!=' 0 ']'
+ exit 6
[2016-07-06 15:53:42,852] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 6]
[2016-07-06 15:53:42,853] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 845, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 735, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 406, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1
There was an error running openstack undercloud upgrade. Exiting....
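For reference, the scenario's upgrade path amounts to roughly the following command sequence (a sketch: the rhos-release invocations are quoted in the scenario, the yum clean/update and `openstack undercloud upgrade` steps are visible in the failure message and the journalctl records below; exact yum flags are an assumption):

```shell
# Step (2): switch repos to the OSP 8 director channel and upgrade the undercloud
sudo rhos-release -P 8-director -r 7.2
sudo yum clean all && sudo yum -y update
openstack undercloud upgrade   # this pass succeeds

# Step (3): switch repos to the OSP 9 director channel and upgrade again
sudo rhos-release -P 9-director
sudo yum clean all && sudo yum -y update
openstack undercloud upgrade   # this pass fails in os-refresh-config (puppet apply exit 6)
```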
-----------------------------------------------------------------------------
[stack@instack ci]$ systemctl status httpd.service
----------------------------------------------------
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/httpd.service.d
           └─openstack-dashboard.conf
   Active: failed (Result: exit-code) since Wed 2016-07-06 15:36:34 EDT; 22min ago
     Docs: man:httpd(8)
           man:apachectl(8)
  Process: 9332 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
  Process: 9365 ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py compress --force (code=exited, status=1/FAILURE)
  Process: 9340 ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py collectstatic --noinput --clear (code=exited, status=0/SUCCESS)
 Main PID: 27589 (code=exited, status=0/SUCCESS)

journalctl -xe:
-----------------
-- Logs begin at Wed 2016-07-06 15:10:11 EDT, end at Wed 2016-07-06 15:26:24 EDT.
--
Jul 06 15:10:11 instack.localdomain sudo[20602]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/os-refres
Jul 06 15:17:05 instack.localdomain sudo[29048]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/cp /root/
Jul 06 15:17:05 instack.localdomain sudo[29050]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/chown sta
Jul 06 15:17:05 instack.localdomain sudo[29052]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/hiera adm
Jul 06 15:17:08 instack.localdomain sudo[29083]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/rm -f /tm
Jul 06 15:17:08 instack.localdomain sudo[29085]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/systemctl
Jul 06 15:18:41 instack.localdomain sudo[29696]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/hiera adm
Jul 06 15:18:59 instack.localdomain sudo[29830]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/hiera adm
Jul 06 15:18:59 instack.localdomain sudo[29833]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/rhos-rele
Jul 06 15:19:01 instack.localdomain sudo[29930]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/yum clean
Jul 06 15:19:01 instack.localdomain sudo[29932]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/yum updat
Jul 06 15:25:51 instack.localdomain sudo[3054]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/yum update
Jul 06 15:25:52 instack.localdomain sudo[3072]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/hostnamect
Jul 06 15:25:52 instack.localdomain sudo[3078]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/hostnamect
Jul 06 15:25:52 instack.localdomain sudo[3080]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/rm -rf /us
Jul 06 15:25:52 instack.localdomain sudo[3082]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/instack -p
Jul 06 15:26:24 instack.localdomain sudo[4888]: stack : TTY=pts/0 ; PWD=/home/stack/rhos-qe-core-installer/tripleo/ci ; USER=root ; COMMAND=/bin/os-refresh
lines 1-18/18 (END)
-----------------------------------------------------------------------------

Keystone.log:
-------------
2016-07-06 15:36:27.315 27592 ERROR keystone.common.wsgi
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters [req-e4a90040-5ec7-4b24-ab2a-eab3b145bd7b - - - - -] DBAPIError exception wrapped from (pymysql.err.InternalError) (1054, u"Unknown column 'user.name' in 'field list'") [SQL: u'SELECT user.id AS user_id, user.name AS user_name, user.domain_id AS user_domain_id, user.password AS user_password, user.enabled AS user_enabled, user.extra AS user_extra, user.default_project_id AS user_default_project_id \nFROM user \nWHERE user.id = %s'] [parameters: (u'4416db2926b94ae5872db041077be829',)]
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters context)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 146, in execute
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters result = self._query(query)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 296, in _query
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters conn.query(q)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 781, in query
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 942, in _read_query_result
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters result.read()
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1138, in read
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters first_packet = self.connection._read_packet()
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 906, in _read_packet
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters packet.check_error()
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 367, in check_error
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters err.raise_mysql_exception(self._data)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters _check_mysql_exception(errinfo)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 115, in _check_mysql_exception
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters raise InternalError(errno, errorvalue)
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters InternalError: (1054, u"Unknown column 'user.name' in 'field list'")
2016-07-06 15:36:27.425 27593 ERROR oslo_db.sqlalchemy.exc_filters
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi [req-e4a90040-5ec7-4b24-ab2a-eab3b145bd7b - - - - -] (pymysql.err.InternalError) (1054, u"Unknown column 'user.name' in 'field list'") [SQL: u'SELECT user.id AS user_id, user.name AS user_name, user.domain_id AS user_domain_id, user.password AS user_password, user.enabled AS user_enabled, user.extra AS user_extra, user.default_project_id AS user_default_project_id \nFROM user \nWHERE user.id = %s'] [parameters: (u'4416db2926b94ae5872db041077be829',)]
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi Traceback (most recent call last):
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 248, in __call__
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi try:
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/contrib/ec2/controllers.py", line 383, in authenticate
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi (user_ref, project_ref, metadata_ref, roles_ref,
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/contrib/ec2/controllers.py", line 130, in _authenticate
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi # TODO(termie): don't create new tokens every time
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 433, in wrapper
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi return f(self, *args, **kwargs)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 444, in wrapper
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi except exception.PublicIDNotFound as e:
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1053, in decorate
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi should_cache_fn)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 657, in get_or_create
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi async_creator) as value:
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in __enter__
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi return self._enter()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 98, in _enter
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi generated = self._enter_create(createdtime)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 149, in _enter_create
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi created = self.creator()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 625, in gen_value
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi created_value = creator()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1049, in creator
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi return fn(*arg, **kw)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 848, in get_user
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi self._get_domain_driver_and_entity_id(user_id))
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 135, in get_user
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 128, in _get_user
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi __tablename__ = 'password'
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 819, in get
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi :param ident: A scalar or tuple value representing
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 852, in _get_impl
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi self._for_update_arg is None:
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 219, in load_on_ident
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi return q.one()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2473, in one
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi self._limit = stop
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2516, in __iter__
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi the Postgresql dialect will render a ``DISTINCT ON (<expressions>>)``
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2531, in _execute_and_instances
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi """Apply the prefixes to the query and return the newly resulting
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi return meth(self, multiparams, params)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi return connection._execute_clauseelement(self, multiparams, params)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi compiled_sql, distilled_params
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi context)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi util.raise_from_cause(newraise, exc_info)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi exc_type, exc_value, exc_tb = exc_info
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi context)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi cursor.execute(statement, parameters)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 146, in execute
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi result = self._query(query)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 296, in _query
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi conn.query(q)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 781, in query
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 942, in _read_query_result
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi result.read()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1138, in read
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi first_packet = self.connection._read_packet()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 906, in _read_packet
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi packet.check_error()
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 367, in check_error
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi err.raise_mysql_exception(self._data)
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi _check_mysql_exception(errinfo) 2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 115, in _check_mysql_exception 2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi raise InternalError(errno, errorvalue) 2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi DBError: (pymysql.err.InternalError) (1054, u"Unknown column 'user.name' in 'field list'") [SQL: u'SELECT user.id AS user_id, user.name AS user_name, user.domain_id AS user_domain_id, user.password AS user_password, user.enabled AS user_enabled, user.extra AS user_extra, user.default_project_id AS user_default_project_id \nFROM user \nWHERE user.id = %s'] [parameters: (u'4416db2926b94ae5872db041077be829',)] 2016-07-06 15:36:27.429 27593 ERROR keystone.common.wsgi 2016-07-06 15:36:29.076 9220 DEBUG oslo_db.sqlalchemy.engines [req-ea824df0-ea1f-4371-8974-65e8e7838855 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:256
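The DBError at the bottom of the trace ("Unknown column 'user.name' in 'field list'") is a schema/code version mismatch: the SQL keystone issues expects a 'name' column that the 'user' table does not have at that point in the upgrade. A minimal, illustrative sketch of this failure class, using Python's sqlite3 standing in for MySQL (the table layout below is invented for the demo and is not keystone's actual schema):

```python
import sqlite3

# Simulate code at one schema version querying a table at another version:
# the table has no 'name' column, so the SELECT that expects one fails.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id TEXT PRIMARY KEY, extra TEXT)")  # no 'name'
try:
    conn.execute("SELECT user.id, user.name FROM user WHERE user.id = ?",
                 ("4416db29",))
except sqlite3.OperationalError as exc:
    # sqlite's analogue of MySQL error 1054 (Unknown column)
    print("query failed:", exc)
```

The real fix is making sure the keystone DB migrations and the installed keystone code end up at the same release, which is exactly what fails when httpd/keystone never comes up cleanly during the upgrade.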
Marios - We believe this might be related to another issue you're looking at (sorry, I don't have the actual number accessible). Can you take a look?
I think bug 1351712 and bug 1353346 are related - they are both about a failed 8 -> 9 undercloud upgrade, and afaics both have the same root symptom: httpd fails to come up on the undercloud during the upgrade. I think it makes sense to keep both, as the issue manifests in slightly different circumstances: update 8 to latest 8 and then upgrade to 9, or upgrade 7 -> 8 and then do the 8 -> 9 upgrade. I see from the logs/description that the root cause is httpd not coming up as part of the upgrade, but I can't see enough information, either in the install-undercloud.log from bug 1351712 or in the description of bug 1353346 - basically we need the httpd logs. I think the rest of the errors in the trace (e.g. the keystone-related ones) are a consequence of httpd not starting. Can we please have the httpd logs from the undercloud when this happens?

Another thought: I suspect that stopping all undercloud services, as at https://review.openstack.org/#/c/331804/, before invoking "openstack undercloud upgrade" might solve this problem. You could try this if you can reproduce on an environment. Otherwise this needs logs/more info. It could yet be another root cause, but if the service stop before the upgrade works we can land that to unblock us on these two bugs.

For clarity, stop services before "openstack undercloud upgrade" (this has been my workflow for all my 8 -> 9 undercloud upgrade testing):

sudo rm -rf /etc/yum.repos.d/*
sudo rhos-release 9-director -d
sudo rhos-release 9 -d
sudo yum clean all && sudo yum clean metadata && sudo yum clean dbcache && sudo yum makecache
sudo yum -y update
sudo systemctl stop openstack-*
sudo systemctl stop neutron-*
openstack undercloud upgrade

thanks, marios
We were testing the scenario: upgrade from 7.3 -> 8.0 -> 9.0, and I encountered another issue that prevents us from verifying this bug. While the upgrade from 7.3 to 8.0 finished successfully, the subsequent upgrade of the same environment from 8.0 to 9.0 failed on the undercloud upgrade phase (we tried both with and without SSL). It seems to be an error with httpd:

07:43:32 Error: /Stage[main]/Apache::Service/Service[httpd]: Failed to call refresh: Could not restart Service[httpd]: Execution of '/bin/systemctl restart httpd' returned 1: Warning: httpd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
07:43:32 Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
07:43:32 Error: /Stage[main]/Apache::Service/Service[httpd]: Could not restart Service[httpd]: Execution of '/bin/systemctl restart httpd' returned 1: Warning: httpd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
07:43:32 Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
07:43:32 Wrapped exception:
07:43:32 Execution of '/bin/systemctl restart httpd' returned 1: Warning: httpd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
07:43:32 Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
07:43:32 Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Triggered 'refresh' from 2 events
07:46:13 Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/services (tried 37, for a total of 170 seconds)
07:46:13 Error: Not managing Keystone_service[Image Service] due to earlier Keystone API failures.
07:46:13 Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[Image Service::image]/ensure: change from absent to present failed: Not managing Keystone_service[Image Service] due to earlier Keystone API failures.
07:48:55 Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/domains (tried 38, for a total of 170 seconds)
07:51:33 Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/domains (tried 38, for a total of 170 seconds)
07:54:11 Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/roles (tried 37, for a total of 170 seconds)
07:54:11 Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
07:54:11 Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
07:54:11 Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
07:54:11 Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
07:56:51 Error: /Stage[main]/Aodh::Keystone::Auth/Keystone::Resource::Service_identity[aodh]/Keystone_user[aodh]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/domains (tried 36, for a total of 170 seconds)
07:59:31 Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/domains (tried 37, for a total of 170 seconds)
07:59:31 Error: Not managing Keystone_service[aodh] due to earlier Keystone API failures.
07:59:31 Error: /Stage[main]/Aodh::Keystone::Auth/Keystone::Resource::Service_identity[aodh]/Keystone_service[aodh::alarming]/ensure: change from absent to present failed: Not managing Keystone_service[aodh] due to earlier Keystone API failures.
08:02:11 Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://192.168.0.1:35357/v3/domains (tried 38, for a total of 170 seconds)
08:02:11 Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
08:02:11 Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
08:02:11 Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
08:02:11 Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.

You can see more info at the link:
https://rhos-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/Director/view/7.x/job/BM_rhos18_Upgrade_7.3_to_8.0_to_9.0_noSSL/lastFailedBuild/consoleFull
Reproduced the issue.
[root@instack ~]# journalctl -u httpd
-- Logs begin at Tue 2016-08-02 17:31:18 EDT, end at Wed 2016-08-03 10:12:19 EDT. --
Aug 02 17:42:10 instack.localdomain systemd[1]: Starting The Apache HTTP Server...
Aug 02 17:42:10 instack.localdomain python[28417]: WARNING:root:"dashboards" and "default_dashboard" in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specif
Aug 02 17:42:11 instack.localdomain python[28417]: 0 static files copied to '/usr/share/openstack-dashboard/static', 848 unmodified.
Aug 02 17:42:11 instack.localdomain python[28427]: WARNING:root:"dashboards" and "default_dashboard" in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specif
Aug 02 17:42:11 instack.localdomain python[28427]: WARNING:py.warnings:RemovedInDjango19Warning: "requires_model_validation" is deprecated in favor of "requires_system_checks".
Aug 02 17:42:11 instack.localdomain python[28427]: WARNING:py.warnings:RemovedInDjango19Warning: SortedDict is deprecated and will be removed in Django 1.9.
Aug 02 17:42:11 instack.localdomain python[28427]: WARNING:py.warnings:RemovedInDjango19Warning: Loading the `url` tag from the `future` library is deprecated and will be removed in Django 1.9. Use the default `ur
Aug 02 17:42:12 instack.localdomain python[28427]: WARNING:py.warnings:RemovedInDjango19Warning: SortedDict is deprecated and will be removed in Django 1.9.
Aug 02 17:42:12 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:13 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:13 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:13 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:13 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:13 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:13 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:17 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:17 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:17 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:17 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:17 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:17 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:18 instack.localdomain python[28427]: ERROR:scss.expression:Function not found: twbs-font-path:1
Aug 02 17:42:19 instack.localdomain python[28427]: Found 'compress' tags in:
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/horizon/templates/horizon/_conf.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/index.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/undeploy_confirmation.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/scale_out.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/share/openstack-dashboard/openstack_dashboard/dashboards/theme/templates/_stylesheets.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/overview/deploy_confirmation.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/tuskar_ui/infrastructure/templates/infrastructure/_workflow_base.html
Aug 02 17:42:19 instack.localdomain python[28427]: /usr/lib/python2.7/site-packages/tuskar_boxes/templates/tuskar_boxes/overview/index.html
...skipping...
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/metadata/tree/metadata-tree-item.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/metadata/tree/metadata-tree-item.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-no-items.directive.spec.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-table.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-select.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-select-all.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/table.spec.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/st-table.mock.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-no-items.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/table.mock.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-expand-detail.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-no-items.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/table.controller.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-table-footer.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/search-bar.spec.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/no-items.mock.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/table.module.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/table.scss'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-search-bar.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/hz-table-footer.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/table/search-bar.html'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/modal-wait-spinner/modal-wait-spinner.module.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/modal-wait-spinner/modal-wait-spinner.scss'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/modal-wait-spinner/modal-wait-spinner.spec.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/modal-wait-spinner/modal-wait-spinner.directive.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/modal-wait-spinner/modal-wait-spinner.service.js'
Aug 02 21:16:39 instack.localdomain python[21290]: Copying '/usr/lib/python2.7/site-packages/horizon/static/framework/widgets/modal/simple-modal.spec.js'
Aug 02 21:16:41 instack.localdomain python[21313]: Compressing...
Aug 02 21:16:41 instack.localdomain systemd[1]: httpd.service: control process exited, code=exited status=1
Aug 02 21:16:41 instack.localdomain systemd[1]: Failed to start The Apache HTTP Server.
Aug 02 21:16:41 instack.localdomain systemd[1]: Unit httpd.service entered failed state.
Aug 02 21:16:41 instack.localdomain systemd[1]: httpd.service failed.
Created attachment 1187096 [details] httpd logs
We've found a possible workaround for this problem, which seems to be caused by tuskar leftovers that prevent httpd from restarting and therefore cause the undercloud upgrade to fail. The steps we took to work around it (not yet an official workaround): before upgrading the undercloud from 8.0 to 9.0, run:
(1) yum remove tuskar*
(2) remove all files in the following list: https://paste.fedoraproject.org/400796/70242832/
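A hedged sketch of how step (2) could be approached: a dry run that only lists candidate leftovers under the tuskar directories seen in the journal output, without deleting anything. The directory paths here are assumptions taken from the logs above; the authoritative file list is the linked paste:

```python
import os

# Dry run: print tuskar files that 'yum remove tuskar*' may have left behind.
# Review the output against the paste before deleting anything.
CANDIDATE_DIRS = (
    "/usr/lib/python2.7/site-packages/tuskar_ui",
    "/usr/lib/python2.7/site-packages/tuskar_boxes",
)
for d in CANDIDATE_DIRS:
    if os.path.isdir(d):
        for root, _dirs, files in os.walk(d):
            for name in files:
                if name.endswith((".py", ".pyc", ".pyo")):
                    print(os.path.join(root, name))
print("scan complete")
```

Running this before and after the yum removal makes it easy to see which byte-compiled files survived the package removal.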
Added some doctext. PLEASE NOTE: I did not verify this workaround. In particular, I am not clear whether the manual removal of the pyc files is absolutely necessary as included in the doctext; I went on comment #7 and will try to catch Omri later.

@omri needinfo on the pyc files: were they not removed with the yum removal of the tuskar packages?
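On the pyc question, stale byte-compiled files can matter on their own: Python will happily import an orphaned .pyc whose .py (and owning package) is gone, which is one way a "removed" tuskar dashboard could still be discovered when httpd starts. A self-contained sketch of that mechanism (the module name is made up; this is not the actual tuskar code):

```python
import os
import py_compile
import sys
import tempfile

# Build a throwaway module, compile it, then delete the source to simulate
# a package removal that leaves the .pyc behind.
d = tempfile.mkdtemp()
src = os.path.join(d, "ghost_mod.py")
with open(src, "w") as f:
    f.write("VALUE = 41\n")
py_compile.compile(src, cfile=os.path.join(d, "ghost_mod.pyc"))
os.remove(src)                      # the "uninstalled" source is gone

sys.path.insert(0, d)
import ghost_mod                    # still importable from the orphaned .pyc
print("imported from orphaned pyc, VALUE =", ghost_mod.VALUE)
```

If this holds for the tuskar files, removing the .pyc/.pyo leftovers is not redundant with the yum removal, which would answer the needinfo above.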
(In reply to marios from comment #9) > added some doctext. PLEASE NOTE: I did not verify this workaround. In > particular I am not clear if the manual removal of the pyc files is > absolutely necessary as included in the doctext, I went on comment #7 will > try and catch omri later > > @omri needinfo on the pyc files. were they not removed with the yum removal > of the tuskar packages? I was trying to verify the suggested workaround, but I got stuck by this Bz : https://bugzilla.redhat.com/show_bug.cgi?id=1364583
*** Bug 1378968 has been marked as a duplicate of this bug. ***
Hi Jon, this one depends on https://bugzilla.redhat.com/show_bug.cgi?id=1455496, or am I missing something ?
Moving to POST; the fix is merged back to OSP 8 (see clones at https://bugzilla.redhat.com/show_bug.cgi?id=1455496#c1).
(In reply to Sofer Athlan-Guyot from comment #17)
> Hi Jon,
>
> this one depends on https://bugzilla.redhat.com/show_bug.cgi?id=1455496, or
> am I missing something ?

All good - sorry for the noise. I was doing some cleanup of clone "Depends on" entries and pruned one too many.
According to our records, this should be resolved by instack-undercloud-4.0.0-17.el7ost. This build is available now.