Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1586358

Summary: OSP-13 undercloud deployment failed : Configuration option(s) ['use_tpool'] not supported
Product: Red Hat OpenStack
Reporter: karan singh <karan>
Component: openstack-tripleo
Assignee: James Slagle <jslagle>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Arik Chernetsky <achernet>
Severity: low
Docs Contact:
Priority: low
Version: 13.0 (Queens)
CC: aschultz, dpeacock, john.s.strock, karan, mburns
Target Milestone: ---
Keywords: Reopened, Triaged, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-07-14 23:03:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Log file from .instack (flags: none)

Description karan singh 2018-06-06 05:24:31 UTC
Description of problem:
While deploying the OSP-13 undercloud, the deployment failed with the error shown under "Actual results" below.


Version-Release number of selected component (if applicable):

[root@refarch-r220-04 ~]# rpm -qa | grep -i oslo
python2-oslo-config-5.2.0-1.el7ost.noarch
python2-oslo-utils-3.35.0-1.el7ost.noarch
python-oslo-middleware-lang-3.34.0-1.el7ost.noarch
python2-oslo-middleware-3.34.0-1.el7ost.noarch
python2-oslo-service-1.29.0-1.el7ost.noarch
python-oslo-cache-lang-1.28.0-1.el7ost.noarch
python-oslo-db-lang-4.33.0-2.el7ost.noarch
puppet-oslo-12.4.0-0.20180329043028.2259336.el7ost.noarch
python2-oslo-context-2.20.0-1.el7ost.noarch
python-oslo-i18n-lang-3.19.0-2.el7ost.noarch
python-oslo-vmware-lang-2.26.0-1.el7ost.noarch
python2-oslo-privsep-1.27.0-1.el7ost.noarch
python-oslo-versionedobjects-lang-1.31.2-1.el7ost.noarch
python-oslo-privsep-lang-1.27.0-1.el7ost.noarch
python-oslo-policy-lang-1.33.1-1.el7ost.noarch
python2-oslo-log-3.36.0-1.el7ost.noarch
python-oslo-concurrency-lang-3.25.0-1.el7ost.noarch
python2-oslo-reports-1.26.0-1.el7ost.noarch
python2-oslo-serialization-2.24.0-1.el7ost.noarch
python-oslo-log-lang-3.36.0-1.el7ost.noarch
python2-oslo-messaging-5.35.0-1.el7ost.noarch
python2-oslo-cache-1.28.0-1.el7ost.noarch
python2-oslo-vmware-2.26.0-1.el7ost.noarch
python2-oslo-db-4.33.0-2.el7ost.noarch
python2-oslo-rootwrap-5.13.0-1.el7ost.noarch
python2-oslo-i18n-3.19.0-2.el7ost.noarch
python2-oslo-versionedobjects-1.31.2-1.el7ost.noarch
python-oslo-utils-lang-3.35.0-1.el7ost.noarch
python2-oslo-concurrency-3.25.0-1.el7ost.noarch
python2-oslo-policy-1.33.1-1.el7ost.noarch
[root@refarch-r220-04 ~]#

How reproducible:
Always

Steps to Reproduce:
1. Deploy osp-13 undercloud from beta repos


Actual results:

2018-06-05 17:27:20,532 INFO: Notice: /Stage[main]/Neutron::Agents::Ml2::Networking_baremetal/Service[ironic-neutron-agent-service]/ensure: ensure changed 'stopped' to 'running'
2018-06-05 17:27:20,533 INFO: Notice: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Triggered 'refresh' from 6 events
2018-06-05 17:27:22,538 INFO: Notice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector]/ensure: ensure changed 'stopped' to 'running'
2018-06-05 17:27:24,767 INFO: Notice: /Stage[main]/Ironic::Inspector/Service[ironic-inspector-dnsmasq]/ensure: ensure changed 'stopped' to 'running'
2018-06-05 17:27:24,768 INFO: Notice: /Stage[main]/Ironic::Deps/Anchor[ironic-inspector::service::end]: Triggered 'refresh' from 1 events
2018-06-05 17:28:04,539 INFO: Notice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]/returns: executed successfully
2018-06-05 17:28:39,747 INFO: Notice: /Stage[main]/Mistral::Db::Sync/Exec[mistral-db-populate]: Triggered 'refresh' from 5 events
2018-06-05 17:28:39,748 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::dbsync::end]: Triggered 'refresh' from 4 events
2018-06-05 17:28:39,749 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::begin]: Triggered 'refresh' from 3 events
2018-06-05 17:28:41,792 INFO: Notice: /Stage[main]/Mistral::Api/Service[mistral-api]/ensure: ensure changed 'stopped' to 'running'
2018-06-05 17:28:44,084 INFO: Notice: /Stage[main]/Mistral::Engine/Service[mistral-engine]/ensure: ensure changed 'stopped' to 'running'
2018-06-05 17:28:46,629 INFO: Notice: /Stage[main]/Mistral::Executor/Service[mistral-executor]/ensure: ensure changed 'stopped' to 'running'
2018-06-05 17:28:46,629 INFO: Notice: /Stage[main]/Mistral::Deps/Anchor[mistral::service::end]: Triggered 'refresh' from 3 events
2018-06-05 17:29:10,339 INFO: Error: Systemd start for openstack-nova-compute failed!
2018-06-05 17:29:10,339 INFO: journalctl log for openstack-nova-compute:
2018-06-05 17:29:10,339 INFO: -- Logs begin at Sat 2018-05-12 14:54:06 EDT, end at Tue 2018-06-05 17:29:09 EDT. --
2018-06-05 17:29:10,339 INFO: Jun 05 17:28:48 refarch-r220-04.front.sepia.ceph.com systemd[1]: Starting OpenStack Nova Compute Server...
2018-06-05 17:29:10,340 INFO: Jun 05 17:28:54 refarch-r220-04.front.sepia.ceph.com nova-compute[10773]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2018-06-05 17:29:10,340 INFO: Jun 05 17:28:54 refarch-r220-04.front.sepia.ceph.com nova-compute[10773]: exception.NotSupportedWarning
2018-06-05 17:29:10,340 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: openstack-nova-compute.service: main process exited, code=exited, status=1/FAILURE
2018-06-05 17:29:10,340 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: Failed to start OpenStack Nova Compute Server.
2018-06-05 17:29:10,340 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: Unit openstack-nova-compute.service entered failed state.
2018-06-05 17:29:10,340 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: openstack-nova-compute.service failed.
2018-06-05 17:29:10,341 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: openstack-nova-compute.service holdoff time over, scheduling restart.
2018-06-05 17:29:10,341 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: Starting OpenStack Nova Compute Server...
2018-06-05 17:29:10,341 INFO: Jun 05 17:29:09 refarch-r220-04.front.sepia.ceph.com nova-compute[11032]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2018-06-05 17:29:10,341 INFO: Jun 05 17:29:09 refarch-r220-04.front.sepia.ceph.com nova-compute[11032]: exception.NotSupportedWarning
2018-06-05 17:29:10,341 INFO:
2018-06-05 17:29:10,341 INFO: Error: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]/ensure: change from stopped to running failed: Systemd start for openstack-nova-compute failed!
2018-06-05 17:29:10,342 INFO: journalctl log for openstack-nova-compute:
2018-06-05 17:29:10,342 INFO: -- Logs begin at Sat 2018-05-12 14:54:06 EDT, end at Tue 2018-06-05 17:29:09 EDT. --
2018-06-05 17:29:10,342 INFO: Jun 05 17:28:48 refarch-r220-04.front.sepia.ceph.com systemd[1]: Starting OpenStack Nova Compute Server...
2018-06-05 17:29:10,342 INFO: Jun 05 17:28:54 refarch-r220-04.front.sepia.ceph.com nova-compute[10773]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2018-06-05 17:29:10,342 INFO: Jun 05 17:28:54 refarch-r220-04.front.sepia.ceph.com nova-compute[10773]: exception.NotSupportedWarning
2018-06-05 17:29:10,342 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: openstack-nova-compute.service: main process exited, code=exited, status=1/FAILURE
2018-06-05 17:29:10,343 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: Failed to start OpenStack Nova Compute Server.
2018-06-05 17:29:10,343 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: Unit openstack-nova-compute.service entered failed state.
2018-06-05 17:29:10,343 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: openstack-nova-compute.service failed.
2018-06-05 17:29:10,343 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: openstack-nova-compute.service holdoff time over, scheduling restart.
2018-06-05 17:29:10,343 INFO: Jun 05 17:29:03 refarch-r220-04.front.sepia.ceph.com systemd[1]: Starting OpenStack Nova Compute Server...
2018-06-05 17:29:10,344 INFO: Jun 05 17:29:09 refarch-r220-04.front.sepia.ceph.com nova-compute[11032]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2018-06-05 17:29:10,344 INFO: Jun 05 17:29:09 refarch-r220-04.front.sepia.ceph.com nova-compute[11032]: exception.NotSupportedWarning
2018-06-05 17:29:10,344 INFO:
2018-06-05 17:29:10,513 INFO: Notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Triggered 'refresh' from 1 events
2018-06-05 17:29:10,513 INFO: Notice: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Dependency Service[nova-compute] has failures: true
2018-06-05 17:29:10,513 INFO: Warning: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Skipping because of failed dependencies
2018-06-05 17:29:10,513 INFO: Notice: /Stage[main]/Nova::Logging/File[/var/log/nova/nova-manage.log]: Dependency Service[nova-compute] has failures: true
2018-06-05 17:29:10,514 INFO: Warning: /Stage[main]/Nova::Logging/File[/var/log/nova/nova-manage.log]: Skipping because of failed dependencies
2018-06-05 17:29:10,514 INFO: Notice: /Stage[main]/Nova::Cell_v2::Discover_hosts/Exec[nova-cell_v2-discover_hosts]: Dependency Service[nova-compute] has failures: true
2018-06-05 17:29:10,514 INFO: Warning: /Stage[main]/Nova::Cell_v2::Discover_hosts/Exec[nova-cell_v2-discover_hosts]: Skipping because of failed dependencies
2018-06-05 17:29:10,514 INFO: Notice: /Stage[main]/Nova/Exec[networking-refresh]: Dependency Service[nova-compute] has failures: true
2018-06-05 17:29:10,514 INFO: Warning: /Stage[main]/Nova/Exec[networking-refresh]: Skipping because of failed dependencies
2018-06-05 17:29:13,000 INFO: Notice: /Stage[main]/Swift::Storage::Account/Swift::Service[swift-account-reaper]/Service[swift-account-reaper]/ensure: ensure changed 'stopped' to 'running'

.....
.....
.....

2018-06-05 17:30:57,486 INFO: [2018-06-05 17:30:57,478] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
2018-06-05 17:30:57,487 INFO:
2018-06-05 17:30:57,487 INFO: [2018-06-05 17:30:57,479] (os-refresh-config) [ERROR] Aborting...
2018-06-05 17:30:57,498 DEBUG: An exception occurred
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2330, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1597, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 676, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
2018-06-05 17:30:57,499 ERROR:
#############################################################################
Undercloud install failed.

Reason: os-refresh-config failed. See log for details.

Expected results:

Undercloud deployment should succeed.

Additional info:
A quick web search suggests that oslo.db versions below 4.34 cause this issue, and OSP-13 ships oslo.db 4.33 (see the rpm output above). Should we update oslo.db to 4.34 or later? Thoughts?

See https://bugs.launchpad.net/nova/+bug/1746530
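The failure mode in the journal output is a library-version mismatch: nova passes a `[database]` option (`use_tpool`) that the installed oslo.db does not recognize, and oslo.db's enginefacade emits a `NotSupportedWarning` for it. The sketch below mimics that mechanism with a hypothetical `configure()` and `SUPPORTED` set; it is not the real oslo.db code, only an illustration of how an unknown option produces the exact warning text seen in the log.

```python
import warnings


class NotSupportedWarning(Warning):
    """Stand-in for oslo.db's exception.NotSupportedWarning (sketch)."""


# Hypothetical subset of options an older library version understands.
SUPPORTED = {"connection", "max_pool_size"}


def configure(**options):
    """Warn about any option the library does not support, like enginefacade does."""
    unknown = sorted(set(options) - SUPPORTED)
    if unknown:
        warnings.warn("Configuration option(s) %s not supported" % unknown,
                      NotSupportedWarning)


# A newer consumer (nova) passes use_tpool to an older library:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    configure(connection="mysql://...", use_tpool=True)

print(caught[0].message)  # Configuration option(s) ['use_tpool'] not supported
```

In the real deployment the warning itself is non-fatal, which is why the launchpad bug above matters: the service exit (status=1) comes from the surrounding startup failure, so matching the oslo.db version to what nova expects is the fix being proposed.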

Comment 1 James Slagle 2018-06-11 20:32:37 UTC
Please provide the full undercloud installation log from ~/.instack.

Comment 2 Alex Schultz 2018-06-25 14:26:45 UTC
Closing due to lack of logs. Please reopen if you hit this again or have additional information to provide.

Comment 3 John 2018-07-14 00:59:00 UTC
Created attachment 1458814 [details]
Log file from .instack

Comment 4 John 2018-07-14 00:59:47 UTC
I've just attached the log file from .instack.  I'm getting the same error from a fresh deployment.

Comment 5 Alex Schultz 2018-07-14 23:03:47 UTC
The new report is not the same as the original bug. John, yours failed due to SELinux:

2018-07-13 17:45:12,724 INFO: Notice: /Stage[main]/Tripleo::Selinux/Exec[/sbin/setenforce 1]/returns: /sbin/setenforce: SELinux is disabled
2018-07-13 17:45:12,728 INFO: Error: /sbin/setenforce 1 returned 1 instead of one of [0]
2018-07-13 17:45:12,730 INFO: Error: /Stage[main]/Tripleo::Selinux/Exec[/sbin/setenforce 1]/returns: change from notrun to 0 failed: /sbin/setenforce 1 returned 1 instead of one of [0]
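The setenforce failure in John's log is expected behavior: `setenforce 1` can only toggle between enforcing and permissive, so it returns an error on a host that booted with SELinux disabled. A sketch of the fix (exact `SELINUXTYPE` varies by install) is to re-enable SELinux in /etc/selinux/config and reboot before re-running the undercloud install:

```shell
# /etc/selinux/config (sketch -- adjust SELINUXTYPE to your install):
# a host that booted with SELINUX=disabled must change this file and
# reboot before `setenforce 1` can succeed.
SELINUX=enforcing
SELINUXTYPE=targeted
```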