Red Hat Bugzilla – Attachment 1455363 Details for Bug 1596308
Placement API unexpected error: 'MIMEAccept' object has no attribute 'acceptable_offers'
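The error in the bug title is consistent with an Accept-header API mismatch: code written against webob >= 1.8 (which added `acceptable_offers()` to its Accept classes) running against an older webob, whose `MIMEAccept` object only offers `best_match()`. The sketch below is a hypothetical, version-tolerant content-negotiation helper illustrating that failure mode and a duck-typed fallback; the stub classes and the `best_offer` name are illustrative, not Placement's actual code.

```python
def best_offer(accept, offers):
    """Pick the best content type from an Accept-like object.

    Tolerates both the webob >= 1.8 API (acceptable_offers) and the
    pre-1.8 MIMEAccept API (best_match). Hypothetical helper for
    illustration only.
    """
    if hasattr(accept, "acceptable_offers"):
        # webob >= 1.8: returns a list of (offer, quality) pairs
        matches = accept.acceptable_offers(offers)
        return matches[0][0] if matches else None
    # webob < 1.8 MIMEAccept: best_match() returns the offer (or a default)
    return accept.best_match(offers)


class OldMIMEAccept:
    """Stand-in for webob < 1.8 MIMEAccept: no acceptable_offers attribute."""

    def best_match(self, offers):
        return offers[0]


class NewAccept:
    """Stand-in for webob >= 1.8 Accept: has acceptable_offers."""

    def acceptable_offers(self, offers):
        return [(o, 1.0) for o in offers]


offers = ["application/json"]
print(best_offer(OldMIMEAccept(), offers))
print(best_offer(NewAccept(), offers))
```

Calling `acceptable_offers()` unconditionally on the old object would raise exactly the AttributeError reported here, which is why pinning a compatible webob version (or guarding as above) resolves this class of failure.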
Attachment 1455363: nova-compute.log
Description: nova-compute.log
Filename: nova-compute.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-06-28 15:04:10 UTC
Size: 2.24 MB
2018-06-28 09:22:10.479 7230 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python2.7/site-packages/os_vif/__init__.py:46
2018-06-28 09:22:10.480 7230 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python2.7/site-packages/os_vif/__init__.py:46
2018-06-28 09:22:10.480 7230 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
2018-06-28 09:22:10.498 7230 INFO oslo_service.periodic_task [-] Skipping periodic task _sync_power_states because its interval is negative
2018-06-28 09:22:10.601 7230 INFO nova.virt.driver [-] Loading compute driver 'ironic.IronicDriver'
2018-06-28 09:22:10.629 7230 WARNING oslo_config.cfg [-] Option "firewall_driver" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2018-06-28 09:22:10.678 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2018-06-28 09:22:10.678 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2018-06-28 09:22:10.679 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python2.7/site-packages/oslo_service/service.py:366
2018-06-28 09:22:10.680 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ******************************************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2890
2018-06-28 09:22:10.680 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2891
2018-06-28 09:22:10.680 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] command line args: [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2892
2018-06-28 09:22:10.681 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] config files: ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2894
2018-06-28 09:22:10.681 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ================================================================================ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2895
2018-06-28 09:22:10.681 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.681 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] allow_same_net_traffic = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.682 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] auto_assign_floating_ip = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.682 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] backdoor_port = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.682 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] backdoor_socket = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.683 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] bandwidth_poll_interval = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.683 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] bindir = /usr/local/bin log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.683 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.684 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.684 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cert = self.pem log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.684 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cnt_vpn_clients = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.685 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute_driver = ironic.IronicDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.685 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute_monitors = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.685 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] config_dir = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.685 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.686 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] config_file = ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.686 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] console_host = undercloud-0.redhat.local log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.686 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] control_exchange = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.687 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cpu_allocation_ratio = 0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.687 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] create_unique_mac_address_attempts = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.687 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] daemon = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.687 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] debug = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.688 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.688 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.688 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.688 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_flavor = m1.small log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.689 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_floating_pool = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.689 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.689 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.690 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] defer_iptables_apply = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.690 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "dhcp_domain" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2018-06-28 09:22:10.690 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dhcp_domain = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.690 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dhcp_lease_time = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.691 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "dhcpbridge" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2018-06-28 09:22:10.691 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dhcpbridge = /usr/bin/nova-dhcpbridge log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.691 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "dhcpbridge_flagfile" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2018-06-28 09:22:10.692 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dhcpbridge_flagfile = ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.692 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] disk_allocation_ratio = 0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.692 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dmz_cidr = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.692 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dns_server = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.693 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dns_update_periodic_interval = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.693 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] dnsmasq_config_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.693 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ebtables_exec_attempts = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.694 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ebtables_retry_interval = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.694 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] enable_network_quota = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.694 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] enable_new_services = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.695 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] enabled_apis = ['metadata'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.695 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.695 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] fake_network = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.695 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] firewall_driver = nova.virt.firewall.NoopFirewallDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.696 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] fixed_ip_disassociate_timeout = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.696 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] fixed_range_v6 = fd00::/48 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.696 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] flat_injected = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.696 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] flat_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.697 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] flat_network_bridge = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.697 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] flat_network_dns = 8.8.4.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.697 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.698 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] force_config_drive = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.698 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "force_dhcp_release" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2018-06-28 09:22:10.698 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] force_dhcp_release = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.698 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] force_raw_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.699 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] force_snat_range = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.699 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] forward_bridge_interface = ['all'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.699 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] gateway = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.699 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] gateway_v6 = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.700 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.700 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.700 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] host = undercloud-0.redhat.local log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.701 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] image_cache_manager_interval = 2400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.701 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] image_cache_subdirectory_name = _base log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.701 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] injected_network_template = /usr/share/nova/interfaces.template log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.701 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.702 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.702 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_dns_domain = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.702 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.703 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.703 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.703 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_usage_audit = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.703 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_usage_audit_period = hour log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.704 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.704 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.704 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.705 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] iptables_bottom_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.705 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] iptables_drop_action = DROP log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.705 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] iptables_top_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.705 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ipv6_backend = rfc2462 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.706 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] key = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.706 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] l3_lib = nova.network.l3.LinuxNetL3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.706 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_base_dn = ou=hosts,dc=example,dc=org log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.707 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.707 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_servers = ['dns.example.org'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.707 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_soa_expiry = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.707 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_soa_hostmaster = hostmaster@example.org log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.708 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_soa_minimum = 7200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.708 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_soa_refresh = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.708 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_soa_retry = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.709 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_url = ldap://ldap.example.com:389 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.709 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ldap_dns_user = uid=admin,ou=people,dc=example,dc=org log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.709 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.710 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] linuxnet_ovs_integration_bridge = br-int log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.710 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.710 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] log_config_append = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.710 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.711 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] log_dir = /var/log/nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.711 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] log_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.711 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] log_options = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.711 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.712 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.712 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.712 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.713 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.713 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.713 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.713 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.714 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.714 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.714 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metadata_host = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.715 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metadata_listen = 192.168.24.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.715 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.715 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metadata_port = 8775 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.715 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metadata_workers = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.716 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.716 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] mkisofs_cmd = genisoimage log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.716 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] multi_host = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.717 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] my_block_storage_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.717 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] my_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.717 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.718 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] network_driver = nova.network.linux_net log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.718 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] network_manager = nova.network.manager.VlanManager log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.718 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] network_size = 256 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.718 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] networks_path = /var/lib/nova/networks log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.719 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent', 'img_signature_hash_method', 'img_signature', 'img_signature_key_type', 'img_signature_certificate_uuid'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.719 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] num_networks = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.719 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] osapi_compute_listen = 192.168.24.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.720 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.720 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.720 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] osapi_compute_workers = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.721 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.721 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] password_length = 12 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.721 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] periodic_enable = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.721 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.722 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.722 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] preallocate_images = none log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.722 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] public_interface = eth0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.722 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] publish_errors = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.723 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] pybasedir = /usr/lib/python2.7/site-packages log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.723 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota_networks = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.724 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.725 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904
2018-06-28 09:22:10.725 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rate_limit_except_level = CRITICAL log_opt_values
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.725 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.725 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.726 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.726 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] record = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.726 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] remove_unused_base_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.726 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.727 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] report_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.727 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.727 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.728 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.728 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] reserved_host_memory_mb = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.728 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.728 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.729 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.729 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.729 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.730 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] routing_source_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.730 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rpc_backend = rabbit log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.730 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rpc_response_timeout = 600 
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.730 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.731 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.731 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.731 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.732 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.732 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] send_arp_for_ha = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.732 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] send_arp_for_ha_count = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.732 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_down_time = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.733 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] servicegroup_driver = db log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.733 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] share_dhcp_address = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.733 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.733 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.734 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.734 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.734 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ssl_only = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.735 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.735 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] sync_power_state_interval = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.735 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.735 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.736 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] teardown_unused_network_gateway = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.736 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] tempdir = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.736 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.736 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] transport_url = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.737 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] update_dns_entries = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.737 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.737 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_cow_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.738 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_ipv6 = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.738 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_journal = False log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.738 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_json = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.738 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_network_dns_servers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.739 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_neutron = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.739 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.739 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_single_default_gateway = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.739 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_stderr = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.740 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] use_syslog = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.740 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.740 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.741 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.741 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.741 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vlan_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.741 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vlan_start = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.742 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.742 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vpn_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.742 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vpn_start = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.743 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] watch_log_file = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.743 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:22:10.743 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.743 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.744 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.744 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.744 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.745 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.745 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.745 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.745 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.746 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 
>2018-06-28 09:22:10.746 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.746 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.747 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.747 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.747 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_ovs_privileged.capabilities = [12] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.747 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.748 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.748 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.748 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] powervm.disk_driver = localdisk log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.748 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] powervm.proc_units_factor = 0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.749 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] powervm.volume_group_name = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.749 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.749 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.750 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.750 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.750 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.751 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.751 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
api.hide_server_address_states = ['building'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.751 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.751 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.752 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.752 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.752 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.752 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.753 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.753 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.753 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.754 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.754 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.754 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.755 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.755 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.755 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.755 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.available_filters = ['tripleo_common.filters.list.tripleo_filters'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.756 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.756 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.756 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.757 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.enabled_filters = ['RetryFilter', 'TripleOCapabilitiesFilter', 'ComputeCapabilitiesFilter', 'AvailabilityZoneFilter', 'RamFilter', 'DiskFilter', 'ComputeFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.757 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.757 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.757 7230 DEBUG 
oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.758 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.758 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.758 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.759 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.759 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.759 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.760 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.760 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.760 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.760 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.761 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.761 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.761 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.762 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.762 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "api_endpoint" from group "ironic" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, api_endpoint will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.). Its value may be silently ignored in the future.
>2018-06-28 09:22:10.762 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.api_endpoint = https://192.168.24.2:13385/v1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.763 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.763 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.763 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.763 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "auth_plugin" from group "ironic" is deprecated. Use option "auth_type" from group "ironic".
>2018-06-28 09:22:10.764 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.auth_type = password log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.764 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.764 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.765 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.765 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "api_endpoint" from group "ironic" is deprecated. Use option "endpoint-override" from group "ironic".
>2018-06-28 09:22:10.765 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.endpoint_override = https://192.168.24.2:13385/v1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.765 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.766 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.766 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.766 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.767 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.767 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.767 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.768 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.768 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.768 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.768 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.769 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ironic.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.769 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.769 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.770 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.770 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.barbican_endpoint_type = public log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.770 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.770 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.771 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.771 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.allowed_direct_url_schemes = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.771 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.api_servers = ['http://192.168.24.3:9292'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.772 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.772 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.772 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.772 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.debug = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.773 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.773 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.773 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.774 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.774 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.774 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.774 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.775 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.num_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.775 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.775 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.775 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.service_type = image log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.776 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.776 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.776 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.777 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.777 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] glance.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.777 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.777 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.778 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.778 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.778 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.779 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.779 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.779 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.779 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.780 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.780 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.host_username = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.780 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.781 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.781 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.781 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.781 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.782 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.782 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.782 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.782 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.783 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.783 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.783 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.vlan_interface = vmnic0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.784 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.784 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.784 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.784 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.785 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.786 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.786 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.786 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.fake_rabbit = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.786 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.787 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.787 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.787 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.788 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.788 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.788 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.788 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_host = localhost log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.789 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_hosts = ['localhost:5672'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.789 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.789 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.790 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_max_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.790 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.790 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_port = 5672 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.790 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.791 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.791 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.791 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.792 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_userid = guest log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.792 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rabbit_virtual_host = / log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.792 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.792 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.793 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.793 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.793 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.794 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.794 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.794 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xvp.console_xvp_conf = /etc/xvp.conf log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.795 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xvp.console_xvp_conf_template = /usr/lib/python2.7/site-packages/nova/console/xvp.conf.template log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.795 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xvp.console_xvp_log = /var/log/xvp.log log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.795 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xvp.console_xvp_multiplex_port = 5900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.795 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xvp.console_xvp_pid = /var/run/xvp.pid log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.796 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.796 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.796 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.797 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.797 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.797 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.797 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.798 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.798 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.798 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.backend = dogpile.cache.null log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.798 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.799 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.799 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.799 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.800 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.800 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.800 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.800 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.801 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.801 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.801 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.memcache_socket_timeout = 3.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.802 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cache.proxies = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.802 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.802 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.802 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.803 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.agent_path = usr/sbin/xe-update-networking log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.803 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.agent_resetnetwork_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.803 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.agent_timeout = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.803 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.agent_version_timeout = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.804 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.block_device_creation_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.804 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.cache_images = all log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.804 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.check_host = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.805 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.connection_concurrent = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.805 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.connection_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.805 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.connection_url = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.805 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.connection_username = root log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.806 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.console_public_hostname = undercloud-0.redhat.local log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.806 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.default_os_type = linux log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.806 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.disable_agent = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.807 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.image_compression_level = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.807 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.image_handler = direct_vhd log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.807 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.image_upload_handler = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.807 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.independent_compute = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.808 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.introduce_vdi_retry_wait = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.808 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.ipxe_boot_menu_url = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.808 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.ipxe_mkisofs_cmd = mkisofs log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.809 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.ipxe_network_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.809 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.login_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.809 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.max_kernel_ramdisk_size = 16777216 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.809 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.num_vbd_unplug_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.810 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.ovs_integration_bridge = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.810 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.running_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.810 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.sparse_copy = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.810 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.sr_base_path = /var/run/sr-mount log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.811 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.sr_matching_filter = default-sr:true log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.811 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.target_host = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.811 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.target_port = 3260 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.812 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.use_agent_default = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.812 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.use_join_force = True
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.812 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.vhd_coalesce_max_attempts = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.813 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.vhd_coalesce_poll_interval = 5.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.813 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] xenserver.vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.813 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.814 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.814 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.814 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.814 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] pci.alias = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.815 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] pci.passthrough_whitelist = [] log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.815 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] mks.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.815 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.816 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.816 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.816 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.connection_parameters = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.817 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.817 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.817 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.max_overflow = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.818 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.max_pool_size = None log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.818 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.max_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.818 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.818 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.pool_timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.819 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.819 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.slave_connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.819 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement_database.sqlite_synchronous = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.820 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.820 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.820 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.collect_timing = False log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.820 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.821 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.821 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.821 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.822 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.822 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.822 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.822 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.823 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.823 7230 DEBUG 
oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.823 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.823 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] keystone.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.824 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.824 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.824 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.auth_type = v3password log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.825 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.825 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.825 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.826 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
neutron.default_floating_pool = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.826 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.826 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.826 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.827 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.827 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.827 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.828 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.828 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.828 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.region_name = log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.828 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.service_metadata_proxy = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.829 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.829 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.service_type = network log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.829 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.830 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.timeout = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.830 7230 WARNING oslo_config.cfg [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Option "url" from group "neutron" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.). Its value may be silently ignored in the future. 
>2018-06-28 09:22:10.830 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.url = https://192.168.24.2:13696 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.830 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.831 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] neutron.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.831 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.831 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.832 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.832 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.832 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.832 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.833 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.keymap = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.833 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.833 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.834 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.834 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.server_listen = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.834 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.835 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.835 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.835 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.835 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
vnc.xvpvncproxy_base_url = http://127.0.0.1:6081/console log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.836 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.xvpvncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.836 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vnc.xvpvncproxy_port = 6081 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.836 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] conductor.workers = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.837 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_notifications.driver = ['messaging'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.837 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.837 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.838 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.838 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.838 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.839 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.839 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.839 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.839 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.840 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.cores = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.840 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.840 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.fixed_ips = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.841 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.floating_ips = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.841 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.841 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.841 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.842 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.instances = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.842 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.842 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.max_age = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.843 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.843 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.843 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.843 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.reservation_expire = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 
09:22:10.844 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.security_group_rules = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.844 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.security_groups = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.844 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.844 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.845 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] quota.until_refresh = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.845 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.checksum_base_images = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.845 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.checksum_interval_seconds = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.846 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.846 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.cpu_mode = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.846 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - 
-] libvirt.cpu_model = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.846 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.847 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.847 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.847 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.848 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.848 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.848 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.848 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.hw_machine_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.849 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.image_info_filename_pattern = /var/lib/nova/instances/_base/%(image)s.info 
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.849 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.images_rbd_ceph_conf = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.850 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.images_rbd_pool = rbd log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.850 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.images_type = default log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.850 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.850 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.851 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.851 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.851 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.852 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 
09:22:10.852 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.852 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.852 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.853 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.853 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.853 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.854 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_permit_auto_converge = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.854 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_permit_post_copy = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.854 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_progress_timeout = 0 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.854 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.855 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.855 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.live_migration_uri = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.855 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.855 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.856 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.856 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.856 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.857 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.num_nvme_discover_tries = 3 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.857 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.num_pcie_ports = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.857 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.858 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.858 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.858 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rbd_secret_uuid = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.858 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rbd_user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.859 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.859 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.859 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.860 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.860 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.860 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.860 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rng_dev_path = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.861 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.rx_queue_size = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.861 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.861 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.862 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.862 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.snapshot_image_format = None log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.862 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.862 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.863 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.sysinfo_serial = auto log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.863 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.tx_queue_size = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.863 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.864 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.use_usb_tablet = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.864 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.864 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.864 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 
>2018-06-28 09:22:10.865 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.865 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.volume_use_multipath = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.865 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.866 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.866 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.866 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.866 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.867 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.867 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.867 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.868 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] libvirt.xen_hvmloader_path = /usr/lib/xen/boot/hvmloader log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.868 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metrics.required = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.868 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.868 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.869 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.869 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.869 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.870 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] notifications.notification_format = unversioned log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.870 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] notifications.notify_on_state_change = vm_and_task_state log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.870 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.870 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.871 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.871 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.871 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.driver = filter_scheduler log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.872 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.872 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.max_attempts = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.872 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.872 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.periodic_task_interval = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.873 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.873 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.query_placement_for_availability_zone = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.873 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] scheduler.workers = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.874 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.874 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.874 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.875 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.875 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.875 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.auth_type = password log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.875 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.876 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.876 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.876 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.877 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.incomplete_consumer_project_id = 00000000-0000-0000-0000-0000000000000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.877 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.incomplete_consumer_user_id = 00000000-0000-0000-0000-0000000000000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.877 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.877 7230 DEBUG 
oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.878 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.878 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.878 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.policy_file = placement-policy.yaml log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.878 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.randomize_allocation_candidates = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.879 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.879 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.879 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.service_type = placement log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.880 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.880 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.880 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.880 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] placement.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.881 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] remote_debug.host = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.881 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] remote_debug.port = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.881 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.882 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.882 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.882 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.882 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.883 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.883 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.883 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.884 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.884 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.884 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.885 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.885 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.885 7230 DEBUG oslo_service.service 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.keymap = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.886 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.886 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.886 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.886 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.auth_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.887 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.887 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.887 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.888 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.888 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.keyfile = 
None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.888 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.send_service_user_token = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.888 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.889 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] service_user.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.889 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.889 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.889 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.890 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.890 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:22:10.890 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.instances_path_share = log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.891 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.891 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.891 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.891 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.892 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.892 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.892 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.893 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.893 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.893 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.893 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.894 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.894 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute.consecutive_build_service_disable_threshold = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.894 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.895 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute.live_migration_wait_for_vif_plug = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.895 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.895 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.896 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rdp.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.896 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.896 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] guestfs.debug = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.897 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.897 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.897 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.897 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.connection_parameters = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.898 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.898 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.898 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.899 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.899 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.899 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.899 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.900 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.900 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.max_retries = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.900 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.min_pool_size = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.901 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.901 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.901 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.902 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.902 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.902 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.902 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.use_db_reconnect = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.903 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] database.use_tpool = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.903 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.903 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.904 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.904 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] workarounds.enable_consoleauth = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.904 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.904 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.bandwidth_update_interval = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.905 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.call_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.905 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.capabilities = ['hypervisor=xenserver;kvm', 'os=linux;windows'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.906 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.cell_type = compute log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.906 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.cells_config = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.906 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.db_check_interval = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.906 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.enable = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.907 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.instance_update_num_instances = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.907 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.instance_update_sync_database_limit = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.907 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.instance_updated_at_threshold = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.908 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.max_hop_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.908 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.mute_child_interval = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.908 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.mute_weight_multiplier = -10000.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.908 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.name = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.909 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.offset_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.909 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.ram_weight_multiplier = 10.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.909 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.reserve_percent = 10.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.910 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.rpc_driver_queue_base = cells.intercell log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.910 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.scheduler = nova.cells.scheduler.CellsScheduler log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.910 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.scheduler_filter_classes = ['nova.cells.filters.all_filters'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.910 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.scheduler_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.911 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.scheduler_retry_delay = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.911 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cells.scheduler_weight_classes = ['nova.cells.weights.all_weighers'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.911 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.912 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.912 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.912 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.912 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.913 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.max_overflow = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.913 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.max_pool_size = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.913 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.913 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.914 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.914 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.914 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.915 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.915 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] devices.enabled_vgpu_types = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.915 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.connection_string = messaging:// log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.915 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.916 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.es_doc_type = notification log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.916 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.916 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.917 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.filter_error_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.917 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.917 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.917 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.918 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.918 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.918 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.919 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.919 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.919 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.919 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.auth_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.920 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.920 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.catalog_info = volumev3:cinderv3:publicURL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.920 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.921 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.921 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.921 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.921 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.922 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.922 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.922 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.923 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.923 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] cinder.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.923 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.923 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.cells = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.924 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.924 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.compute = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.924 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.924 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.console = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.925 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.consoleauth = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.925 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.intercell = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.925 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.network = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.926 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.926 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] key_manager.backend = nova.keymgr.conf_key_mgr.ConfKeyManager log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.926 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] key_manager.fixed_key = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.926 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] osapi_v21.project_id_regex = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:22:10.927 7230 DEBUG oslo_service.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] ******************************************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2914
>2018-06-28 09:22:10.928 7230 INFO nova.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting compute node (version 18.0.0-0.20180625215857.9a8a98b.el7ost)
>2018-06-28 09:22:11.026 7230 WARNING nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record found for host undercloud-0.redhat.local. If this is the first time this service is starting on this host, then you can ignore this warning.: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found.
>2018-06-28 09:22:11.727 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:22:11.728 7230 DEBUG nova.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Creating RPC server for service compute start /usr/lib/python2.7/site-packages/nova/service.py:185
>2018-06-28 09:22:11.750 7230 DEBUG nova.service [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python2.7/site-packages/nova/service.py:203
>2018-06-28 09:22:11.750 7230 DEBUG nova.servicegroup.drivers.db [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] DB_Driver: join new ServiceGroup member undercloud-0.redhat.local to the compute group, service = <Service: host=undercloud-0.redhat.local, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:47
>2018-06-28 09:23:10.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:23:10.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:23:10.528 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 09:23:10.530 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.563 7230 INFO nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running instance usage audit for host undercloud-0.redhat.local from 2018-06-28 12:00:00 to 2018-06-28 13:00:00. 0 instances.
>2018-06-28 09:23:10.588 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.588 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.589 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.589 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:23:10.590 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.601 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found.
>2018-06-28 09:23:10.662 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:23:10.663 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:23:10.663 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:10.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:10.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:10.547 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:10.568 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:10.568 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:24:11.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:11.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:11.516 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found.
>2018-06-28 09:24:11.782 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:24:11.783 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:12.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:12.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:12.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:24:12.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:24:12.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:24:12.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 09:25:10.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:10.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:25:11.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:11.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:12.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:12.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:12.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:13.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:13.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:13.516 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found.
>2018-06-28 09:25:13.580 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:25:14.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:25:14.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:25:14.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:25:14.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:26:10.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:10.516 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:26:11.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:13.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:13.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:13.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:13.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:14.496 7230 DEBUG oslo_service.periodic_task 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:14.517 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:14.529 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:26:14.585 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:26:15.567 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:16.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:26:16.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:26:16.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:26:16.513 7230 DEBUG 
nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:27:10.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:10.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:27:10.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:10.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:10.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 09:27:10.524 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 09:27:10.525 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 
09:27:10.525 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 09:27:11.536 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:13.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:13.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:15.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:15.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:15.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:15.523 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host 
undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:27:15.578 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:27:17.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:17.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:27:17.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:27:17.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:27:17.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:28:11.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:11.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:28:11.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:13.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:13.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:14.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:15.519 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:15.531 7230 ERROR nova.compute.manager 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:28:15.595 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:28:16.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:16.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:19.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:19.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:28:19.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:28:19.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:28:19.514 7230 DEBUG 
nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:29:12.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:12.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:29:12.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:14.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:14.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:16.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:17.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:17.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:17.513 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:29:17.778 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:29:19.778 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:21.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:29:21.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:29:21.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:29:21.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:30:12.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:14.501 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:14.501 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:30:15.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:15.518 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:16.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:17.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:17.513 7230 DEBUG oslo_service.periodic_task 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:17.529 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:30:17.629 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:30:19.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:19.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:21.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:30:21.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:30:21.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:30:21.513 7230 DEBUG 
nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:31:14.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:16.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:16.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:16.501 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:31:17.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:17.512 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. 
>2018-06-28 09:31:17.576 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 0 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:31:17.577 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:19.577 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:20.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:20.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:22.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:31:22.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:31:22.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:31:22.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:32:10.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:10.517 7230 INFO nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Updating bandwidth usage cache >2018-06-28 09:32:10.537 7230 INFO nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Bandwidth usage not supported by ironic.IronicDriver. >2018-06-28 09:32:11.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:11.510 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:11.510 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 09:32:16.509 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:16.510 7230 DEBUG 
oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:16.510 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:16.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 09:32:16.527 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 09:32:17.516 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:18.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:18.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:32:19.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:19.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:19.526 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for host undercloud-0.redhat.local: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:32:21.021 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:32:21.023 7230 WARNING nova.compute.monitors [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Excluding nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled monitors (CONF.compute_monitors). 
>2018-06-28 09:32:21.024 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:32:21.024 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00306701660156 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:32:21.024 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:32:21.025 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:32:21.038 7230 WARNING nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for undercloud-0.redhat.local:55611eb8-c4fa-4576-ae28-d2017563fdd0: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. 
>2018-06-28 09:32:21.064 7230 INFO nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Compute node record created for undercloud-0.redhat.local:55611eb8-c4fa-4576-ae28-d2017563fdd0 with uuid: e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:32:21.093 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "placement_client" acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:32:21.095 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "placement_client" released by "nova.scheduler.client.report._create_client" :: held 0.002s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:32:23.459 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-36332cff-d402-4e5d-8913-a20b55ee067e] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-36332cff-d402-4e5d-8913-a20b55ee067e", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:32:23.460 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 2.436s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 590, in _init_compute_node >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:32:23.461 7230 ERROR nova.compute.manager >2018-06-28 09:32:23.462 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:32:23.463 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 2.44177103043 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:32:23.463 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:32:23.463 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:32:23.475 7230 WARNING nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for undercloud-0.redhat.local:d56ae6cc-b350-42fd-b0ba-40b6bfa6af02: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. >2018-06-28 09:32:23.500 7230 INFO nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Compute node record created for undercloud-0.redhat.local:d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 with uuid: 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:32:25.382 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-e5a1714b-c31b-4133-9594-4fa595d08b73] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-e5a1714b-c31b-4133-9594-4fa595d08b73", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:32:25.383 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 1.920s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 590, in _init_compute_node >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:32:25.383 7230 ERROR nova.compute.manager >2018-06-28 09:32:25.384 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:32:25.384 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 4.3633441925 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:32:25.385 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:32:25.385 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:32:25.397 7230 WARNING nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] No compute node record for undercloud-0.redhat.local:c40592fd-6b81-4279-8496-8a3c5da28f52: ComputeHostNotFound_Remote: Compute host undercloud-0.redhat.local could not be found. 
>2018-06-28 09:32:25.418 7230 INFO nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Compute node record created for undercloud-0.redhat.local:c40592fd-6b81-4279-8496-8a3c5da28f52 with uuid: f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:32:25.440 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-c42961fe-7002-4f12-b00e-57329645fefb] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-c42961fe-7002-4f12-b00e-57329645fefb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:32:25.440 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.055s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:32:25.440 7230 ERROR 
nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 590, in _init_compute_node >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:32:25.440 7230 ERROR nova.compute.manager >2018-06-28 09:32:25.441 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:25.441 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:25.442 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:32:25.442 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:32:25.452 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:32:26.437 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:32:26.450 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:16.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:16.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:18.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:18.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:33:18.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:20.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:20.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:20.595 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:33:20.595 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:33:20.595 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000813961029053 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:33:20.596 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None 
_report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:33:20.596 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:33:20.623 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-b6ea80ee-45cb-4ac9-8c94-1f5dc89b3e3d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-b6ea80ee-45cb-4ac9-8c94-1f5dc89b3e3d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:33:20.623 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.027s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:33:20.623 7230 ERROR 
nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:33:20.623 7230 ERROR nova.compute.manager >2018-06-28 09:33:20.624 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:33:20.624 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0296030044556 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:33:20.625 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:33:20.625 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" 
:: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:33:20.655 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-f0b72caa-8d84-45fc-9429-a918717a00b9] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-f0b72caa-8d84-45fc-9429-a918717a00b9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:33:20.655 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.031s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:33:20.656 7230 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:33:20.656 7230 ERROR nova.compute.manager >2018-06-28 09:33:20.656 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:33:20.656 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0618960857391 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:33:20.657 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:33:20.657 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:33:20.682 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-cf2183f7-d20f-4bad-a6df-7e733050b8bb] Failed to 
retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-cf2183f7-d20f-4bad-a6df-7e733050b8bb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:33:20.682 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.025s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:33:20.683 7230 ERROR nova.compute.manager >2018-06-28 09:33:22.683 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:22.694 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:24.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:33:24.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:33:24.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:33:24.510 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:34:17.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:18.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:18.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:19.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:19.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:19.515 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:34:21.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:21.803 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:34:21.803 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:34:21.803 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000748872756958 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:34:21.804 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:34:21.804 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:34:21.834 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
[req-e4a97e15-a309-47c5-bfb2-88eeafcf7d0e] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-e4a97e15-a309-47c5-bfb2-88eeafcf7d0e", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:34:21.834 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.030s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:34:21.834 7230 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:34:21.834 7230 ERROR nova.compute.manager >2018-06-28 09:34:21.835 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:34:21.836 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0330288410187 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:34:21.836 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:34:21.836 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:34:21.864 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-d1a41f6d-c78a-4025-b3ab-d4f2e1cf353b] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-d1a41f6d-c78a-4025-b3ab-d4f2e1cf353b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:34:21.864 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.028s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:34:21.865 7230 ERROR nova.compute.manager >2018-06-28 09:34:21.865 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:34:21.866 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0628769397736 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:34:21.866 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=0MB free_disk=0GB free_vcpus=unknown pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:34:21.866 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:34:21.891 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-13303e47-2d18-4cee-91b9-a5956961b032] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-13303e47-2d18-4cee-91b9-a5956961b032", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:34:21.892 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.025s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:34:21.892 7230 ERROR nova.compute.manager >2018-06-28 09:34:22.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:23.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:24.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:25.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:34:25.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:34:25.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:34:25.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:35:18.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:19.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:19.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:20.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:20.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:35:23.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:23.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:23.594 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:35:23.595 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:35:23.595 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000635862350464 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:35:23.595 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:35:23.596 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:35:23.622 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-b1cd0213-5b5e-413c-867c-98cf0a049633] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-b1cd0213-5b5e-413c-867c-98cf0a049633", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:35:23.622 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.027s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:35:23.623 7230 ERROR nova.compute.manager
2018-06-28 09:35:23.624 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:35:23.624 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0296139717102 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:35:23.624 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:35:23.624 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:35:23.650 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-d6a95fec-639f-4278-8d27-685ab57fc0e3] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-d6a95fec-639f-4278-8d27-685ab57fc0e3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:35:23.650 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.026s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:35:23.650 7230 ERROR nova.compute.manager
2018-06-28 09:35:23.651 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:35:23.651 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0571730136871 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:35:23.652 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:35:23.652 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:35:23.674 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-e0885289-d1e7-4e8b-8502-8b21b1c5bc0a] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-e0885289-d1e7-4e8b-8502-8b21b1c5bc0a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:35:23.674 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.022s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:35:23.674 7230 ERROR nova.compute.manager
2018-06-28 09:35:24.675 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:25.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:25.509 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:35:25.509 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:35:25.510 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:35:25.522 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:36:19.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:20.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:20.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:20.501 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:36:20.501 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:24.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:24.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:25.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:25.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:36:25.614 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:36:25.614 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:36:25.614 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000752925872803 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:36:25.615 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:36:25.615 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:36:25.652 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-c2fc3625-c290-4aa7-be90-0d8701fe8a5b] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-c2fc3625-c290-4aa7-be90-0d8701fe8a5b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:36:25.652 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.037s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:36:25.652 7230 ERROR nova.compute.manager
2018-06-28 09:36:25.653 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:36:25.654 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0400099754333 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:36:25.654 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:36:25.654 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:36:25.680 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-b0e92286-bbd9-49ca-b42f-702b452b43c3] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-b0e92286-bbd9-49ca-b42f-702b452b43c3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:36:25.681 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.026s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:36:25.681 7230 ERROR nova.compute.manager
2018-06-28 09:36:25.681 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:36:25.682 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0681219100952 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:36:25.682 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:36:25.682 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:36:25.690 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-82e157a3-e721-4940-b452-db651dbe94da] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-82e157a3-e721-4940-b452-db651dbe94da", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:36:25.691 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:36:25.691 7230 ERROR nova.compute.manager >2018-06-28 09:36:25.692 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:36:25.692 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:36:25.692 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:36:25.704 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:36:27.691 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:12.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:12.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 09:37:15.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:17.499 7230 DEBUG oslo_service.periodic_task 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:17.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 09:37:17.515 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 09:37:20.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:20.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:20.515 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:37:20.516 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:22.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:25.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:25.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:37:25.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:37:25.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:37:26.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:26.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:27.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:37:27.597 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:37:27.597 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:37:27.598 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000718832015991 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:37:27.598 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None 
_report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:37:27.598 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:37:27.805 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-99a3fcf0-3b67-4a9e-b334-d213bf614032] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-99a3fcf0-3b67-4a9e-b334-d213bf614032", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:37:27.806 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.208s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:37:27.806 7230 ERROR 
nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:37:27.806 7230 ERROR nova.compute.manager >2018-06-28 09:37:27.807 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:37:27.807 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.210417032242 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:37:27.808 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:37:27.808 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: 
waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:37:28.002 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-873a76e9-dc1e-474a-958d-5b2246bbeccf] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-873a76e9-dc1e-474a-958d-5b2246bbeccf", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:37:28.003 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.195s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:37:28.003 7230 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:37:28.003 7230 ERROR nova.compute.manager >2018-06-28 09:37:28.004 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:37:28.004 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.407301902771 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:37:28.005 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:37:28.005 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:37:28.032 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-46a779d0-bc7e-4793-b9d9-4efcbeb15f3d] Failed to retrieve 
resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-46a779d0-bc7e-4793-b9d9-4efcbeb15f3d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:37:28.033 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.028s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in 
_update_available_resource >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) 
>2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:37:28.033 7230 ERROR nova.compute.manager >2018-06-28 09:37:30.032 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:20.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:22.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:22.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:38:22.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:23.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:25.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:26.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:26.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:27.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:38:27.592 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:38:27.592 7230 DEBUG nova.compute.resource_tracker 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:38:27.592 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000670194625854 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:38:27.593 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:38:27.593 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:38:27.602 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-72e2d0fc-0c81-4399-8773-77946a34ce51] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-72e2d0fc-0c81-4399-8773-77946a34ce51", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:38:27.602 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager self._update(context, cn) 
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:38:27.603 7230 ERROR nova.compute.manager
2018-06-28 09:38:27.604 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:38:27.604 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0121262073517 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:38:27.604 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:38:27.604 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:38:27.612 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-3f642544-2127-4947-91e8-888e86c59ea1] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-3f642544-2127-4947-91e8-888e86c59ea1", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:38:27.613 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:38:27.613 7230 ERROR nova.compute.manager
2018-06-28 09:38:27.614 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:38:27.614 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0220730304718 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:38:27.614 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:38:27.614 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:38:27.621 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-4885dbad-3c7c-4698-8888-15158d932e30] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-4885dbad-3c7c-4698-8888-15158d932e30", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:38:27.622 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:38:27.622 7230 ERROR nova.compute.manager
2018-06-28 09:38:27.623 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:38:27.623 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:38:27.623 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:38:27.634 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:38:30.635 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:21.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:22.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:22.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:39:22.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:25.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:27.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:27.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:27.790 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:39:27.791 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:39:27.791 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000712156295776 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:39:27.791 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:39:27.792 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:39:27.802 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-b0e9d7e3-2dcb-4dfa-9036-bb959a10d2c5] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-b0e9d7e3-2dcb-4dfa-9036-bb959a10d2c5", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:39:27.802 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:39:27.802 7230 ERROR nova.compute.manager
2018-06-28 09:39:27.803 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:39:27.803 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0130281448364 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:39:27.804 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:39:27.804 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:39:27.812 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-b8e8c589-9342-4e32-bd10-335c7e0d7daa] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-b8e8c589-9342-4e32-bd10-335c7e0d7daa", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:39:27.812 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:39:27.813 7230 ERROR nova.compute.manager
2018-06-28 09:39:27.813 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:39:27.814 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0231471061707 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:39:27.814 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:39:27.814 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:39:27.822 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-972b909b-9e54-47cb-9ca6-823ab96a62f8] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-972b909b-9e54-47cb-9ca6-823ab96a62f8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:39:27.823 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:39:27.823 7230 ERROR nova.compute.manager
2018-06-28 09:39:28.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:29.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:39:29.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:39:29.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:39:29.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:39:32.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:40:23.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:40:23.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:40:23.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:40:24.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:26.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:27.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:27.803 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:40:27.804 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:40:27.804 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000943183898926 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:40:27.804 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None 
_report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:40:27.805 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:40:27.814 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-6b750b21-f5df-4301-bdf1-2153d4ab596a] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-6b750b21-f5df-4301-bdf1-2153d4ab596a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:40:27.815 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:40:27.815 7230 ERROR 
nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:40:27.815 7230 ERROR nova.compute.manager >2018-06-28 09:40:27.816 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:40:27.816 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0131950378418 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:40:27.816 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:40:27.817 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: 
waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:40:27.825 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-248343f1-d9d0-4acd-8845-2f9d8f460db5] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-248343f1-d9d0-4acd-8845-2f9d8f460db5", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:40:27.825 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:40:27.825 7230 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:40:27.825 7230 ERROR nova.compute.manager >2018-06-28 09:40:27.826 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:40:27.826 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0231871604919 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:40:27.826 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:40:27.827 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:40:27.834 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-ad44336a-bf64-4b58-b65f-0d092ce00c7f] Failed to 
retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-ad44336a-bf64-4b58-b65f-0d092ce00c7f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:40:27.834 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:40:27.835 7230 ERROR nova.compute.manager >2018-06-28 09:40:27.835 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:28.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:29.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:31.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:40:31.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:40:31.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:40:31.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:40:33.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:41:23.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:41:23.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:41:23.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:41:25.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:41:27.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:41:27.635 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:41:27.635 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:41:27.636 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000720024108887 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:41:27.636 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:41:27.636 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:41:27.651 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-a681634e-c5bb-4b17-82da-5c88a716b811] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-a681634e-c5bb-4b17-82da-5c88a716b811", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:41:27.651 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.015s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:41:27.651 7230 ERROR 
nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:41:27.651 7230 
ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:41:27.651 7230 ERROR nova.compute.manager
2018-06-28 09:41:27.652 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:41:27.652 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0171279907227 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:41:27.652 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:41:27.653 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:41:27.663 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-c2123a83-8854-41fd-bdae-126739a4c6ec] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-c2123a83-8854-41fd-bdae-126739a4c6ec", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:41:27.663 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:41:27.663 7230 ERROR nova.compute.manager
2018-06-28 09:41:27.664 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:41:27.664 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0290570259094 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:41:27.664 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:41:27.665 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:41:27.679 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-ca6cbcbc-bdbf-484a-ab0c-46b13193d313] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-ca6cbcbc-bdbf-484a-ab0c-46b13193d313", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:41:27.680 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.015s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:41:27.680 7230 ERROR nova.compute.manager
2018-06-28 09:41:28.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:41:29.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:41:31.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:41:33.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:41:33.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:41:33.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:41:33.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:41:33.529 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:42:18.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:18.501 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
2018-06-28 09:42:19.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:23.509 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:23.510 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:42:25.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:27.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:27.600 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:42:27.600 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:42:27.601 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000695943832397 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:42:27.601 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:42:27.601 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:42:27.791 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-50d04b56-a0f1-4d3e-a0de-23f70cafe9b9] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-50d04b56-a0f1-4d3e-a0de-23f70cafe9b9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:42:27.791 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.190s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:42:27.792 7230 ERROR nova.compute.manager
2018-06-28 09:42:27.793 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:42:27.793 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.193098068237 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:42:27.793 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:42:27.794 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:42:28.024 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-69941522-2fa7-494b-b213-b3923949cbea] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-69941522-2fa7-494b-b213-b3923949cbea", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:42:28.024 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.231s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:42:28.025 7230 ERROR nova.compute.manager
2018-06-28 09:42:28.025 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:42:28.026 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.425932884216 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:42:28.026 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:42:28.026 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:42:28.043 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-48b77001-ae30-4e54-a373-b127b47ade65] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-48b77001-ae30-4e54-a373-b127b47ade65", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:42:28.043 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.017s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:42:28.044 7230 ERROR nova.compute.manager
2018-06-28 09:42:28.044 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:28.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:29.529 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:29.530 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
2018-06-28 09:42:29.556 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
2018-06-28 09:42:29.556 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:30.526 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:31.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:33.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:33.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:42:33.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:42:33.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:42:33.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:42:34.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:43:25.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:43:25.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:43:27.501 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:43:28.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:43:29.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:43:29.598 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:43:29.598 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:43:29.599 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000877857208252 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:43:29.599 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:43:29.599 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:43:29.610 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-44b432e9-0093-4118-a49d-ac973d038452] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-44b432e9-0093-4118-a49d-ac973d038452", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:43:29.610 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:43:29.611 7230 ERROR nova.compute.manager >2018-06-28 09:43:29.612 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:43:29.612 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0141937732697 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:43:29.612 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:43:29.613 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:43:29.621 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-2cf0b92e-662a-4f68-9e2d-451cc5fece6c] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-2cf0b92e-662a-4f68-9e2d-451cc5fece6c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:43:29.622 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:43:29.622 7230 ERROR nova.compute.manager >2018-06-28 09:43:29.622 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:43:29.623 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0247988700867 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:43:29.623 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:43:29.623 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:43:29.631 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-3c801445-f97a-4a5b-ac89-22be62602853] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-3c801445-f97a-4a5b-ac89-22be62602853", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:43:29.631 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:43:29.632 7230 ERROR nova.compute.manager >2018-06-28 09:43:31.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:43:31.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:43:33.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:43:33.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:43:33.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:43:33.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:43:35.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:43:35.527 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:26.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:26.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:44:27.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:28.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:29.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:30.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:30.810 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:44:30.811 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:44:30.811 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000771999359131 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 
09:44:30.812 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:44:30.812 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:44:30.822 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-def6a1f8-c30a-4e9b-bff7-5d75451921e5] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-def6a1f8-c30a-4e9b-bff7-5d75451921e5", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:44:30.822 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:44:30.822 7230 ERROR nova.compute.manager >2018-06-28 09:44:30.823 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:44:30.823 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0129640102386 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:44:30.824 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:44:30.824 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:44:30.832 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-5fdc93ce-2b23-4e12-8500-dafb818456bb] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-5fdc93ce-2b23-4e12-8500-dafb818456bb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:44:30.832 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:44:30.832 7230 ERROR nova.compute.manager >2018-06-28 09:44:30.833 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:44:30.833 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0227479934692 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:44:30.833 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:44:30.834 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:44:30.841 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-5b06572a-3b8c-4fc4-865a-90d3f2b74ce7] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-5b06572a-3b8c-4fc4-865a-90d3f2b74ce7", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:44:30.842 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:44:30.842 7230 ERROR nova.compute.manager >2018-06-28 09:44:32.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:32.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:35.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:35.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:44:35.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:44:35.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:44:35.524 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:44:36.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:28.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:28.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:28.501 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:45:29.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:32.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:32.798 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:45:32.799 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:45:32.799 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000921964645386 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:45:32.799 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:45:32.800 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:45:32.810 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-5fb38dda-ab92-4e53-9fde-81940dca5506] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-5fb38dda-ab92-4e53-9fde-81940dca5506", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:45:32.811 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:45:32.811 7230 ERROR 
nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:45:32.811 7230 
ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:45:32.811 7230 ERROR nova.compute.manager >2018-06-28 09:45:32.812 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:45:32.812 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.014279127121 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:45:32.813 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:45:32.813 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:45:32.824 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-3ec661b8-3812-48f7-a3c4-69ac8769f739] Failed to 
retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-3ec661b8-3812-48f7-a3c4-69ac8769f739", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:45:32.824 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:45:32.824 7230 ERROR nova.compute.manager >2018-06-28 09:45:32.825 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:45:32.825 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0269100666046 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:45:32.825 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:45:32.826 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:45:32.834 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-db2a6ca1-2a9c-492f-8893-a8b9ae2d476b] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-db2a6ca1-2a9c-492f-8893-a8b9ae2d476b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:45:32.834 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:45:32.834 7230 ERROR nova.compute.manager >2018-06-28 09:45:32.835 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:34.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:35.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:36.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:45:36.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:45:36.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:45:36.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:45:38.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:46:28.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:46:28.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:46:29.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:46:29.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:46:29.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:46:32.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:46:32.601 7230 DEBUG nova.virt.ironic.driver 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:46:32.601 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:46:32.601 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000776052474976 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:46:32.602 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:46:32.602 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:46:32.613 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-7ef98672-fed8-4067-b04f-bd240797f605] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-7ef98672-fed8-4067-b04f-bd240797f605", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
2018-06-28 09:46:32.614 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:46:32.614 7230 ERROR nova.compute.manager
2018-06-28 09:46:32.615 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:46:32.616 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.015025138855 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:46:32.616 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:46:32.616 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:46:32.625 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-54ed5112-f759-4ccd-a71a-93a9bef1608a] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-54ed5112-f759-4ccd-a71a-93a9bef1608a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:46:32.626 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:46:32.626 7230 ERROR nova.compute.manager
2018-06-28 09:46:32.627 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:46:32.627 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0264019966125 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:46:32.627 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:46:32.628 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:46:32.635 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-e4c1d35d-e6d7-44e3-ab9a-e38a25579816] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-e4c1d35d-e6d7-44e3-ab9a-e38a25579816", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:46:32.636 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:46:32.636 7230 ERROR nova.compute.manager
2018-06-28 09:46:33.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:46:35.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:46:37.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:46:37.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:46:37.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:46:37.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:46:37.525 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:46:40.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:25.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:25.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
2018-06-28 09:47:29.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:29.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
2018-06-28 09:47:29.526 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
2018-06-28 09:47:30.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:30.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:47:30.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:31.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:31.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:33.509 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:47:33.610 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:47:33.610 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:47:33.610 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00080394744873 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:47:33.611 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:47:33.611 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:47:33.822 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-eb4336c0-1ef5-4cf4-9fea-3503f26214bb] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-eb4336c0-1ef5-4cf4-9fea-3503f26214bb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:47:33.822 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.211s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:47:33.823 7230 ERROR nova.compute.manager
2018-06-28 09:47:33.823 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:47:33.824 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.214200019836 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:47:33.824 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:47:33.825 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:47:34.031 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-981f6d31-278b-48cc-9430-235e84e131a4] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-981f6d31-278b-48cc-9430-235e84e131a4", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:47:34.031 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.207s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:47:34.031 7230 ERROR nova.compute.manager
2018-06-28 09:47:34.032 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:47:34.032 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.422857999802 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:47:34.033 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:47:34.033 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:47:34.043 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-dc1529ff-b2da-41f5-a28b-ff1e836c36d7] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-dc1529ff-b2da-41f5-a28b-ff1e836c36d7", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:47:34.043 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:47:34.043 7230 ERROR nova.compute.manager
>2018-06-28 09:47:34.044 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:47:36.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:47:38.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:47:38.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:47:38.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:47:38.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 09:47:39.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:47:41.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:30.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:30.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:48:31.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:31.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:32.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:33.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:33.613 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:48:33.613 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:48:33.614 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000756025314331 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:48:33.614 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:48:33.614 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:48:33.624 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-73d27550-a61a-4d61-a9dc-e45dfb8d3e25] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-73d27550-a61a-4d61-a9dc-e45dfb8d3e25", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:48:33.624 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:48:33.625 7230 ERROR nova.compute.manager
>2018-06-28 09:48:33.626 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:48:33.626 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0128970146179 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:48:33.626 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:48:33.627 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:48:33.635 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-1bde801f-3fc1-4a10-b6bd-f7714626d266] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-1bde801f-3fc1-4a10-b6bd-f7714626d266", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:48:33.635 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:48:33.635 7230 ERROR nova.compute.manager
>2018-06-28 09:48:33.636 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:48:33.636 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0232422351837 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:48:33.637 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:48:33.637 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:48:33.645 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-28fc568e-777e-46e2-a1d4-28bc4abac342] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-28fc568e-777e-46e2-a1d4-28bc4abac342", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:48:33.645 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:48:33.645 7230 ERROR nova.compute.manager
>2018-06-28 09:48:35.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:37.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:39.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:40.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:48:40.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:48:40.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:48:40.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 09:48:43.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:49:31.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:49:32.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:49:32.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:49:32.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:49:35.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:49:35.603 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:49:35.604 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:49:35.604 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000734806060791 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:49:35.604 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:49:35.605 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:49:35.614 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-6b788c77-23db-4445-b01b-14a50ff0fffc] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-6b788c77-23db-4445-b01b-14a50ff0fffc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:49:35.614 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:49:35.615 7230 ERROR nova.compute.manager
>2018-06-28 09:49:35.615 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:49:35.616 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123229026794 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:49:35.616 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:49:35.616 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:49:35.624 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-46614f31-6d84-4569-b881-08ed4cbe17b8] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-46614f31-6d84-4569-b881-08ed4cbe17b8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:49:35.625 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:49:35.625 7230 ERROR nova.compute.manager >2018-06-28 09:49:35.626 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:49:35.626 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0224928855896 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:49:35.626 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:49:35.626 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:49:35.634 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-1a32509d-9c06-4d12-b90d-d3cff07d0e1d] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-1a32509d-9c06-4d12-b90d-d3cff07d0e1d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:49:35.634 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:49:35.634 7230 ERROR nova.compute.manager >2018-06-28 09:49:36.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:49:37.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:49:39.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:49:42.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:49:42.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:49:42.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:49:42.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:49:44.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:32.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:32.532 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:33.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:33.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:50:34.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:35.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:35.812 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:50:35.813 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:50:35.813 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000679016113281 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:50:35.813 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:50:35.814 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:50:35.823 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-159c312d-50c1-4d86-ab39-11db75cb761d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-159c312d-50c1-4d86-ab39-11db75cb761d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:50:35.823 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:50:35.824 7230 ERROR 
nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:50:35.824 7230 
ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:50:35.824 7230 ERROR nova.compute.manager >2018-06-28 09:50:35.825 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:50:35.825 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0124261379242 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:50:35.825 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:50:35.825 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:50:35.835 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-a90104c3-1905-43e1-994d-190158e061f7] Failed to 
retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-a90104c3-1905-43e1-994d-190158e061f7", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:50:35.835 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:50:35.835 7230 ERROR nova.compute.manager >2018-06-28 09:50:35.836 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:50:35.836 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0238461494446 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:50:35.837 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:50:35.837 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:50:35.848 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-87fab69a-1027-44a6-b2ac-40c08bcf133d] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-87fab69a-1027-44a6-b2ac-40c08bcf133d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:50:35.849 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:50:35.849 7230 ERROR nova.compute.manager >2018-06-28 09:50:37.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:37.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:41.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:43.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:50:43.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:50:43.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:50:43.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:50:44.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:33.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:33.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:51:34.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:34.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:36.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:36.804 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:51:36.804 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] 
Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:51:36.805 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000872135162354 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:51:36.805 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:51:36.805 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:51:36.816 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-97e43eca-1562-45a2-a0fb-0d32fc994efc] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-97e43eca-1562-45a2-a0fb-0d32fc994efc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:51:36.816 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:51:36.816 7230 ERROR nova.compute.manager >2018-06-28 09:51:36.817 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:51:36.818 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0138580799103 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:51:36.818 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:51:36.818 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:51:36.828 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-b442fe36-b076-4fe7-8d87-47bb46a3dfe4] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-b442fe36-b076-4fe7-8d87-47bb46a3dfe4", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:51:36.828 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:51:36.828 7230 ERROR nova.compute.manager >2018-06-28 09:51:36.829 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:51:36.829 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0255980491638 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:51:36.830 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:51:36.830 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:51:36.838 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-d208ffc8-3359-48cd-9af8-8ba0f1ddec09] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d208ffc8-3359-48cd-9af8-8ba0f1ddec09", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:51:36.839 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:51:36.839 7230 ERROR nova.compute.manager >2018-06-28 09:51:38.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:38.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:43.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:43.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:51:43.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:51:43.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:51:43.523 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:51:45.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:10.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:32.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:33.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:33.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:52:34.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:35.515 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:36.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:37.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:37.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 09:52:38.510 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:52:38.607 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:52:38.607 7230 DEBUG nova.compute.resource_tracker 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:52:38.607 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000699996948242 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:52:38.608 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:52:38.608 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:52:38.814 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-90ec3bf2-c148-48b4-9691-72906009b8dd] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-90ec3bf2-c148-48b4-9691-72906009b8dd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:52:38.815 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.207s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:52:38.815 7230 ERROR nova.compute.manager >2018-06-28 09:52:38.816 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:52:38.816 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.209461927414 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:52:38.816 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:52:38.817 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:52:39.025 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-5ae4c89f-2ced-4dfc-b799-8c018cebea44] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-5ae4c89f-2ced-4dfc-b799-8c018cebea44", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:52:39.025 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.208s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:52:39.025 7230 ERROR nova.compute.manager
>2018-06-28 09:52:39.026 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:52:39.026 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.41955780983 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:52:39.027 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:52:39.027 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:52:39.036 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-d8919728-0a93-4209-ad44-d2ba4c3e85ed] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d8919728-0a93-4209-ad44-d2ba4c3e85ed", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:52:39.036 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:52:39.036 7230 ERROR nova.compute.manager
>2018-06-28 09:52:39.037 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:52:39.046 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:52:39.047 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
>2018-06-28 09:52:39.060 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
>2018-06-28 09:52:39.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:52:40.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:52:43.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:52:43.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:52:43.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:52:43.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 09:52:44.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:52:46.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:34.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:34.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:53:35.500 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:36.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:39.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:40.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:40.608 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:53:40.608 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:53:40.609 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000738143920898 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:53:40.609 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:53:40.609 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:53:40.620 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-23759a8a-8b45-475c-9d18-4f14f08fe97e] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-23759a8a-8b45-475c-9d18-4f14f08fe97e", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:53:40.620 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:53:40.620 7230 ERROR nova.compute.manager
>2018-06-28 09:53:40.621 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:53:40.622 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0138709545135 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:53:40.622 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:53:40.622 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:53:40.631 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-4903f1fd-1efe-4d62-9e8a-9e5e71139154] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-4903f1fd-1efe-4d62-9e8a-9e5e71139154", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:53:40.631 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 09:53:40.632 7230 ERROR nova.compute.manager
>2018-06-28 09:53:40.632 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:53:40.632 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0244770050049 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:53:40.633 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:53:40.633 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:53:40.641 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-5868c2f8-d7e1-4983-a353-e759e0d9b22f] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-5868c2f8-d7e1-4983-a353-e759e0d9b22f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:53:40.642 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 09:53:40.642 7230 ERROR nova.compute.manager
>2018-06-28 09:53:40.643 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:44.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:44.511 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:53:44.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 09:53:44.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 09:53:44.524 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 09:53:48.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:34.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:34.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:34.514 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 09:54:37.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:37.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:39.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:40.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 09:54:40.603 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:54:40.603 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:54:40.604 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000801801681519 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28
09:54:40.604 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:54:40.604 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:54:40.615 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-78328285-9951-404d-b3ed-89bb59f94bcc] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-78328285-9951-404d-b3ed-89bb59f94bcc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:54:40.615 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:54:40.616 7230 ERROR nova.compute.manager >2018-06-28 09:54:40.617 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:54:40.617 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0140769481659 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:54:40.617 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:54:40.618 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:54:40.626 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-5cdba52f-2bde-4e2a-a390-3b5f1df28827] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-5cdba52f-2bde-4e2a-a390-3b5f1df28827", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:54:40.627 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:54:40.627 7230 ERROR nova.compute.manager >2018-06-28 09:54:40.627 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:54:40.628 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0248429775238 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:54:40.628 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:54:40.628 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:54:40.636 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-505791c1-d63f-48c4-ad95-7da6f1e9b64b] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-505791c1-d63f-48c4-ad95-7da6f1e9b64b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:54:40.636 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:54:40.636 7230 ERROR nova.compute.manager >2018-06-28 09:54:41.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:54:46.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:54:46.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:54:46.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:54:46.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:54:46.525 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:54:49.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:35.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:35.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:55:38.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:39.497 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:39.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:40.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:40.805 7230 DEBUG nova.virt.ironic.driver 
[req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:55:40.805 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:55:40.806 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000707149505615 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:55:40.806 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:55:40.806 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:55:40.817 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-eb7e0c73-567f-4000-87d8-ed58bc40bacd] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-eb7e0c73-567f-4000-87d8-ed58bc40bacd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:55:40.817 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:55:40.818 7230 ERROR nova.compute.manager >2018-06-28 09:55:40.819 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:55:40.819 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0140199661255 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:55:40.819 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:55:40.820 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:55:40.829 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-d1e26551-0fec-41d0-a521-93eb58d92d98] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-d1e26551-0fec-41d0-a521-93eb58d92d98", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:55:40.830 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:55:40.830 7230 ERROR nova.compute.manager >2018-06-28 09:55:40.831 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:55:40.831 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0261390209198 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:55:40.832 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:55:40.832 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:55:40.841 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-e6983229-e610-4ca2-ba63-87932df87666] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-e6983229-e610-4ca2-ba63-87932df87666", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:55:40.841 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:55:40.841 7230 ERROR nova.compute.manager >2018-06-28 09:55:43.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:46.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:46.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:55:46.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:55:46.512 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:55:48.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:55:50.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:36.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:36.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:36.515 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 09:56:39.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:41.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:41.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:42.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:42.601 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 09:56:42.601 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:56:42.601 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000749111175537 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 
09:56:42.602 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:56:42.602 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:56:42.612 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-501a4b13-50d9-4184-8704-eda6f9ede043] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-501a4b13-50d9-4184-8704-eda6f9ede043", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:56:42.612 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 09:56:42.613 7230 ERROR nova.compute.manager >2018-06-28 09:56:42.614 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:56:42.614 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0133891105652 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:56:42.614 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:56:42.615 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:56:42.623 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-27e6c28d-8aea-4e05-b9b4-a84be995fd10] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-27e6c28d-8aea-4e05-b9b4-a84be995fd10", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:56:42.624 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:56:42.624 7230 ERROR nova.compute.manager >2018-06-28 09:56:42.625 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:56:42.625 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.024365901947 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:56:42.625 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:56:42.625 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:56:42.633 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-9864fba6-e2f0-4cc0-b26f-870293612b09] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-9864fba6-e2f0-4cc0-b26f-870293612b09", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 09:56:42.633 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:56:42.634 7230 ERROR nova.compute.manager >2018-06-28 09:56:44.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:48.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:48.512 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:56:48.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:56:48.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:56:48.526 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:56:51.513 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:36.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:36.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:57:41.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:42.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:42.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
2018-06-28 09:57:42.515 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
2018-06-28 09:57:43.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:43.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:44.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:44.795 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:57:44.795 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:57:44.796 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000666856765747 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:57:44.796 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:57:44.796 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:57:45.002 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-47bf7ebf-5804-41fb-8e3f-61ec54786c0c] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-47bf7ebf-5804-41fb-8e3f-61ec54786c0c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:57:45.003 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.206s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:57:45.003 7230 ERROR nova.compute.manager
2018-06-28 09:57:45.004 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:57:45.004 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.20916891098 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:57:45.005 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:57:45.005 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:57:45.185 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-9e9a85c0-0733-49a5-b4de-c6dbd979fc77] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-9e9a85c0-0733-49a5-b4de-c6dbd979fc77", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:57:45.185 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.181s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:57:45.186 7230 ERROR nova.compute.manager
2018-06-28 09:57:45.186 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:57:45.187 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.391558885574 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:57:45.187 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:57:45.187 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:57:45.196 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-a65245a5-3b37-4dc5-9f4c-d1e649c27a07] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-a65245a5-3b37-4dc5-9f4c-d1e649c27a07", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:57:45.196 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:57:45.196 7230 ERROR nova.compute.manager
2018-06-28 09:57:45.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:49.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:49.510 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:49.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 09:57:49.511 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 09:57:49.524 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 09:57:50.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:50.508 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:57:50.509 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
2018-06-28 09:57:51.519 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:36.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:36.501 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 09:58:39.496 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:43.516 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:43.516 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:44.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:45.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:46.499 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 09:58:46.602 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 09:58:46.603 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:58:46.603 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000782012939453 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:58:46.603 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:58:46.604 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:58:46.616 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-dc4f3b56-ac81-4b3b-b2bd-a159a7987eb0] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-dc4f3b56-ac81-4b3b-b2bd-a159a7987eb0", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:58:46.617 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.013s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 09:58:46.617 7230 ERROR nova.compute.manager
2018-06-28 09:58:46.618 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:58:46.619 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0165529251099 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:58:46.619 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:58:46.619 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:58:46.630 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-8c6ac156-8f5c-455f-855d-e87217d20b8c] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-8c6ac156-8f5c-455f-855d-e87217d20b8c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:58:46.631 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 09:58:46.631 7230 ERROR nova.compute.manager
2018-06-28 09:58:46.631 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 09:58:46.632 7230 DEBUG nova.virt.ironic.driver [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0296850204468 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 09:58:46.632 7230 DEBUG nova.compute.resource_tracker [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 09:58:46.632 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 09:58:46.640 7230 ERROR nova.scheduler.client.report [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] [req-d1d586bc-33be-491a-b8ec-0c49c38fd1e9] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d1d586bc-33be-491a-b8ec-0c49c38fd1e9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 09:58:46.640 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager     self._update(context, cn)
>2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:58:46.640 7230 ERROR nova.compute.manager >2018-06-28 09:58:49.640 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:58:50.498 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:58:50.499 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 09:58:50.500 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 09:58:50.513 7230 DEBUG nova.compute.manager [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 09:58:52.514 7230 DEBUG oslo_service.periodic_task [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 09:59:11.836 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-06-28 09:59:11.837 7230 DEBUG oslo_concurrency.lockutils [req-e61ac1b4-5cda-4f61-88ad-7418f20cec19 - - - - -] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-06-28 09:59:14.608 4269 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python2.7/site-packages/os_vif/__init__.py:46 >2018-06-28 09:59:14.609 4269 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python2.7/site-packages/os_vif/__init__.py:46 >2018-06-28 09:59:14.610 4269 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge >2018-06-28 09:59:14.640 4269 INFO oslo_service.periodic_task [-] Skipping periodic task _sync_power_states because its interval is negative >2018-06-28 09:59:14.767 4269 INFO nova.virt.driver [-] Loading compute driver 'ironic.IronicDriver' >2018-06-28 09:59:14.803 4269 WARNING oslo_config.cfg [-] Option "firewall_driver" from group "DEFAULT" is deprecated for removal ( >nova-network is deprecated, as are any related configuration options. >). Its value may be silently ignored in the future. 
>2018-06-28 09:59:14.852 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-06-28 09:59:14.853 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-06-28 09:59:14.854 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python2.7/site-packages/oslo_service/service.py:366 >2018-06-28 09:59:14.855 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ******************************************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2890 >2018-06-28 09:59:14.855 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2891 >2018-06-28 09:59:14.855 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] command line args: [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2892 >2018-06-28 09:59:14.856 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] config files: ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2894 >2018-06-28 09:59:14.856 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ================================================================================ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2895 >2018-06-28 09:59:14.856 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
allow_resize_to_same_host = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.857 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] allow_same_net_traffic = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.857 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] auto_assign_floating_ip = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.857 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] backdoor_port = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.857 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] backdoor_socket = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.858 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] bandwidth_poll_interval = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.858 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] bindir = /usr/local/bin log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.858 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.859 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.859 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cert = self.pem log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 
>2018-06-28 09:59:14.859 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cnt_vpn_clients = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.859 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute_driver = ironic.IronicDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.860 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute_monitors = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.860 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] config_dir = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.860 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.861 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] config_file = ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.861 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] console_host = undercloud-0.redhat.local log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.861 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] control_exchange = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.861 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cpu_allocation_ratio = 0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.862 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] create_unique_mac_address_attempts = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.862 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] daemon = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.862 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] debug = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.863 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.863 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.863 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.864 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_flavor = m1.small log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.864 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_floating_pool = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.864 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 
'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.865 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.865 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] defer_iptables_apply = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.865 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "dhcp_domain" from group "DEFAULT" is deprecated for removal ( >nova-network is deprecated, as are any related configuration options. >). Its value may be silently ignored in the future. >2018-06-28 09:59:14.865 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dhcp_domain = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.866 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dhcp_lease_time = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.866 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "dhcpbridge" from group "DEFAULT" is deprecated for removal ( >nova-network is deprecated, as are any related configuration options. >). Its value may be silently ignored in the future. 
>2018-06-28 09:59:14.866 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dhcpbridge = /usr/bin/nova-dhcpbridge log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.867 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "dhcpbridge_flagfile" from group "DEFAULT" is deprecated for removal ( >nova-network is deprecated, as are any related configuration options. >). Its value may be silently ignored in the future. >2018-06-28 09:59:14.867 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dhcpbridge_flagfile = ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.867 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] disk_allocation_ratio = 0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.867 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dmz_cidr = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.868 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dns_server = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.868 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dns_update_periodic_interval = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.868 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] dnsmasq_config_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.869 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ebtables_exec_attempts = 3 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.869 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ebtables_retry_interval = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.869 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] enable_network_quota = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.870 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] enable_new_services = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.870 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] enabled_apis = ['metadata'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.870 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.870 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] fake_network = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.871 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] firewall_driver = nova.virt.firewall.NoopFirewallDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.871 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] fixed_ip_disassociate_timeout = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.871 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] fixed_range_v6 = fd00::/48 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.872 
4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] flat_injected = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.872 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] flat_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.872 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] flat_network_bridge = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.873 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] flat_network_dns = 8.8.4.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.873 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.873 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] force_config_drive = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.873 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "force_dhcp_release" from group "DEFAULT" is deprecated for removal ( >nova-network is deprecated, as are any related configuration options. >). Its value may be silently ignored in the future. 
>2018-06-28 09:59:14.874 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] force_dhcp_release = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.874 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] force_raw_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.874 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] force_snat_range = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.875 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] forward_bridge_interface = ['all'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.875 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] gateway = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.875 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] gateway_v6 = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.875 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.876 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.876 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] host = undercloud-0.redhat.local log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.876 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
image_cache_manager_interval = 2400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.877 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] image_cache_subdirectory_name = _base log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.877 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] injected_network_template = /usr/share/nova/interfaces.template log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.877 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.878 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.878 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_dns_domain = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.878 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.878 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.879 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.879 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
instance_usage_audit = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.879 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_usage_audit_period = hour log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.880 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.880 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.881 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.881 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] iptables_bottom_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.881 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] iptables_drop_action = DROP log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.881 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] iptables_top_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.882 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ipv6_backend = rfc2462 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.882 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] key = None log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.882 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] l3_lib = nova.network.l3.LinuxNetL3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.883 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_base_dn = ou=hosts,dc=example,dc=org log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.883 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.883 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_servers = ['dns.example.org'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.884 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_soa_expiry = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.884 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_soa_hostmaster = hostmaster@example.org log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.884 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_soa_minimum = 7200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.885 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_soa_refresh = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.885 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_soa_retry = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 
>2018-06-28 09:59:14.885 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_url = ldap://ldap.example.com:389 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.886 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ldap_dns_user = uid=admin,ou=people,dc=example,dc=org log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.886 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.886 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] linuxnet_ovs_integration_bridge = br-int log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.886 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.887 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] log_config_append = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.887 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.887 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] log_dir = /var/log/nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.888 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] log_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 
09:59:14.888 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] log_options = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.888 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.888 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.889 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.889 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.889 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.889 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.890 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] max_concurrent_builds = 5 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.890 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.890 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.890 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.891 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metadata_host = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.891 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metadata_listen = 192.168.24.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.891 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.892 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metadata_port = 8775 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.892 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metadata_workers = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.892 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.893 4269 DEBUG 
oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] mkisofs_cmd = genisoimage log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.893 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] multi_host = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.893 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] my_block_storage_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.894 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] my_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.894 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.894 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] network_driver = nova.network.linux_net log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.894 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] network_manager = nova.network.manager.VlanManager log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.895 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] network_size = 256 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.895 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] networks_path = /var/lib/nova/networks log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.895 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
non_inheritable_image_properties = ['cache_in_nova', 'bittorrent', 'img_signature_hash_method', 'img_signature', 'img_signature_key_type', 'img_signature_certificate_uuid'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.896 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] num_networks = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.896 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] osapi_compute_listen = 192.168.24.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.896 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.898 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.898 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] osapi_compute_workers = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.898 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.898 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] password_length = 12 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.899 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] periodic_enable = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.899 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.899 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.899 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] preallocate_images = none log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.900 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] public_interface = eth0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.900 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] publish_errors = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.900 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] pybasedir = /usr/lib/python2.7/site-packages log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.901 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota_networks = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.901 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.901 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.901 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rate_limit_except_level = CRITICAL log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.902 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.902 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.902 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.902 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] record = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.903 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] remove_unused_base_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.903 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.903 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] report_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.904 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.904 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.904 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.904 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] reserved_host_memory_mb = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.905 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.905 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.905 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.906 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.906 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.906 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] routing_source_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.906 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rpc_backend = rabbit log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.907 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rpc_response_timeout = 600 
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.907 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.907 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.908 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.908 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.908 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.909 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] send_arp_for_ha = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.909 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] send_arp_for_ha_count = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.909 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_down_time = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.909 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] servicegroup_driver = db log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.910 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] share_dhcp_address = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.910 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.910 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.910 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.911 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.911 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ssl_only = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.911 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.912 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] sync_power_state_interval = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.912 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.912 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.912 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] teardown_unused_network_gateway = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.913 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] tempdir = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.913 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.913 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] transport_url = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.914 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] update_dns_entries = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.914 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.914 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_cow_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.914 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_ipv6 = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.915 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_journal = False log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.915 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_json = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.915 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_network_dns_servers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.916 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_neutron = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.916 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.916 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_single_default_gateway = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.916 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_stderr = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.917 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] use_syslog = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.917 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.917 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.918 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.918 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.918 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vlan_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.918 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vlan_start = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.919 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.919 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vpn_ip = 172.16.0.4 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.920 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vpn_start = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.920 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] watch_log_file = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.920 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2904 >2018-06-28 09:59:14.920 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.921 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.921 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.921 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.922 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.922 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.922 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.923 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.923 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.923 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 
>2018-06-28 09:59:14.923 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.924 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.924 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.924 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.925 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_ovs_privileged.capabilities = [12] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.925 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.925 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.925 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.926 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] powervm.disk_driver = localdisk log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.926 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] powervm.proc_units_factor = 0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.926 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] powervm.volume_group_name = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.927 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.927 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.927 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.928 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.928 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.928 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.929 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
api.hide_server_address_states = ['building'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.929 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.929 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.929 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.930 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.930 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.930 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.931 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.931 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.931 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.931 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.932 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.932 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.932 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.933 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.933 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.933 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.available_filters = ['tripleo_common.filters.list.tripleo_filters'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.934 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.934 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.934 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.935 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.enabled_filters = ['RetryFilter', 'TripleOCapabilitiesFilter', 'ComputeCapabilitiesFilter', 'AvailabilityZoneFilter', 'RamFilter', 'DiskFilter', 'ComputeFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.935 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.935 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.936 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.936 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.936 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.937 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.937 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.937 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.937 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.938 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.938 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.938 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.939 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.939 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.939 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.940 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.940 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.940 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "api_endpoint" from group "ironic" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, api_endpoint will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.). Its value may be silently ignored in the future.
>2018-06-28 09:59:14.941 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.api_endpoint = https://192.168.24.2:13385/v1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.941 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.941 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.942 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.942 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "auth_plugin" from group "ironic" is deprecated. Use option "auth_type" from group "ironic".
>2018-06-28 09:59:14.942 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.auth_type = password log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.942 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.943 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.943 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.943 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "api_endpoint" from group "ironic" is deprecated. Use option "endpoint-override" from group "ironic".
>2018-06-28 09:59:14.944 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.endpoint_override = https://192.168.24.2:13385/v1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.944 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.944 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.944 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.945 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.945 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.945 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.946 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.946 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.946 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.946 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.947 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.947 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ironic.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.948 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.948 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.948 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.948 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.barbican_endpoint_type = public log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.949 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.949 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.949 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.950 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.allowed_direct_url_schemes = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.950 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.api_servers = ['http://192.168.24.3:9292'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.950 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.950 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.951 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.951 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.debug = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.951 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.951 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.952 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.952 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.952 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.953 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.953 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.953 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.num_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.953 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.954 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.954 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.service_type = image log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.954 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.954 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.955 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.955 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.955 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] glance.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.956 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.956 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.956 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.956 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.957 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.957 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.957 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.958 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.958 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.958 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.958 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.host_username = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.959 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.959 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.959 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.959 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.960 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.960 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.960 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.961 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.961 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.961 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.962 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.962 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.vlan_interface = vmnic0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.962 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.962 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.963 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.963 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.963 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.963 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.964 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.964 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.fake_rabbit = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.964 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.965 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.965 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.965 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.966 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.966 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.966 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.966 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_host = localhost log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.967 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_hosts = ['localhost:5672'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.967 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.967 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.968 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_max_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.968 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.968 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_port = 5672 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.969 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.969 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.969 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.970 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.970 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_userid = guest log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.970 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rabbit_virtual_host = / log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.971 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.971 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.971 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.972 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.972 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.972 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.972 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.973 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xvp.console_xvp_conf = /etc/xvp.conf log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.973 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xvp.console_xvp_conf_template = /usr/lib/python2.7/site-packages/nova/console/xvp.conf.template log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.973 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xvp.console_xvp_log = /var/log/xvp.log log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.974 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xvp.console_xvp_multiplex_port = 5900 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.974 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xvp.console_xvp_pid = /var/run/xvp.pid log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.974 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.975 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.975 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.975 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.975 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.976 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.976 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.976 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.977 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.977 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.backend = dogpile.cache.null log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.977 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.977 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.978 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.978 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.978 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.979 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.979 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.979 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.979 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.980 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.980 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.memcache_socket_timeout = 3.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.980 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cache.proxies = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.981 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.981 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.981 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.981 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.agent_path = usr/sbin/xe-update-networking log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.982 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.agent_resetnetwork_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.982 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.agent_timeout = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.982 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.agent_version_timeout = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.983 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.block_device_creation_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.983 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.cache_images = all log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:14.983 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -]
xenserver.check_host = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.983 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.connection_concurrent = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.984 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.connection_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.984 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.connection_url = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.984 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.connection_username = root log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.985 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.console_public_hostname = undercloud-0.redhat.local log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.985 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.default_os_type = linux log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.985 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.disable_agent = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.985 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.image_compression_level = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.986 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
xenserver.image_handler = direct_vhd log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.986 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.image_upload_handler = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.986 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.independent_compute = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.987 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.introduce_vdi_retry_wait = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.987 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.ipxe_boot_menu_url = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.987 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.ipxe_mkisofs_cmd = mkisofs log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.987 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.ipxe_network_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.988 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.login_timeout = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.988 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.max_kernel_ramdisk_size = 16777216 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.988 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
xenserver.num_vbd_unplug_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.989 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.ovs_integration_bridge = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.989 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.running_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.989 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.sparse_copy = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.990 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.sr_base_path = /var/run/sr-mount log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.990 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.sr_matching_filter = default-sr:true log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.990 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.target_host = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.990 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.target_port = 3260 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.991 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.use_agent_default = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.991 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.use_join_force = True 
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.991 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.vhd_coalesce_max_attempts = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.992 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.vhd_coalesce_poll_interval = 5.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.992 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] xenserver.vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.992 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.993 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.993 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.993 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.994 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] pci.alias = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.994 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] pci.passthrough_whitelist = [] log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.994 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] mks.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.995 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.995 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.996 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.996 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.connection_parameters = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.996 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.997 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.997 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.max_overflow = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.997 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.max_pool_size = None log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.998 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.max_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.998 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.998 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.pool_timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.998 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.999 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.slave_connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.999 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement_database.sqlite_synchronous = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:14.999 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.000 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.000 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.collect_timing = False log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.000 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.001 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.001 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.001 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.002 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.002 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.002 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.002 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.003 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.003 4269 DEBUG 
oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.003 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.004 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] keystone.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.004 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.004 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.005 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.auth_type = v3password log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.005 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.005 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.006 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.006 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
neutron.default_floating_pool = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.006 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.006 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.007 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.007 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.007 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.008 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.008 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.008 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.008 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.region_name = log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.009 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.service_metadata_proxy = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.009 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.009 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.service_type = network log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.010 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.010 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.timeout = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.010 4269 WARNING oslo_config.cfg [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Option "url" from group "neutron" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.). Its value may be silently ignored in the future. 
>2018-06-28 09:59:15.011 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.url = https://192.168.24.2:13696 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.011 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.011 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] neutron.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.011 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.012 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.012 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.012 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.013 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.013 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.013 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.keymap = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.013 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.014 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.014 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.014 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.server_listen = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.015 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.015 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.015 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.015 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.016 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
vnc.xvpvncproxy_base_url = http://127.0.0.1:6081/console log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.016 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.xvpvncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.016 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vnc.xvpvncproxy_port = 6081 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.017 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] conductor.workers = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.017 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_notifications.driver = ['messaging'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.017 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.018 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.018 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.018 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.019 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.019 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.019 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.019 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.020 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.020 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.cores = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.020 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.021 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.fixed_ips = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.021 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.floating_ips = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.021 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.021 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.022 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.022 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.instances = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.022 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.023 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.max_age = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.023 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.023 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.023 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.024 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.reservation_expire = 86400 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 
09:59:15.024 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.security_group_rules = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.024 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.security_groups = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.025 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.025 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.025 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] quota.until_refresh = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.025 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.checksum_base_images = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.026 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.checksum_interval_seconds = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.026 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.026 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.cpu_mode = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.026 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - 
-] libvirt.cpu_model = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.027 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.027 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.027 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.028 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.028 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.028 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.028 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.029 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.hw_machine_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.029 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.image_info_filename_pattern = /var/lib/nova/instances/_base/%(image)s.info 
log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.029 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.images_rbd_ceph_conf = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.030 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.images_rbd_pool = rbd log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.030 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.images_type = default log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.030 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.031 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.031 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.031 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.031 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.032 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 
09:59:15.032 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.032 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.032 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.033 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.033 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.033 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.034 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_permit_auto_converge = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.034 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_permit_post_copy = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.034 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_progress_timeout = 0 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.034 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.035 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.035 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.live_migration_uri = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.035 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.036 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.036 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.036 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.037 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.037 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.num_nvme_discover_tries = 3 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.038 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.num_pcie_ports = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.038 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.038 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.039 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.039 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rbd_secret_uuid = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.039 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rbd_user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.040 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.040 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.040 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.040 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.041 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.041 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.041 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rng_dev_path = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.042 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.rx_queue_size = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.042 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.042 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.043 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.043 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.snapshot_image_format = None log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.043 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.044 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.044 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.sysinfo_serial = auto log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.044 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.tx_queue_size = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.044 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.045 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.use_usb_tablet = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.045 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.045 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.046 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 
>2018-06-28 09:59:15.046 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.046 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.volume_use_multipath = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.047 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.047 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.047 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.047 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.048 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.048 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.048 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.049 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.049 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] libvirt.xen_hvmloader_path = /usr/lib/xen/boot/hvmloader log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.049 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metrics.required = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.050 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.050 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.050 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.050 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.051 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.051 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] notifications.notification_format = unversioned log_opt_values 
/usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.051 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] notifications.notify_on_state_change = vm_and_task_state log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.052 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.052 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.052 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.052 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.053 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.driver = filter_scheduler log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.053 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.053 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.max_attempts = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.054 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.054 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.periodic_task_interval = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.054 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.054 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.query_placement_for_availability_zone = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.055 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] scheduler.workers = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.055 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.055 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.056 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.056 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.056 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.057 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.auth_type = password log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.057 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.057 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.057 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.058 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.058 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.incomplete_consumer_project_id = 00000000-0000-0000-0000-0000000000000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.059 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.incomplete_consumer_user_id = 00000000-0000-0000-0000-0000000000000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.059 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.059 4269 DEBUG 
oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.060 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.max_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.060 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.min_version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.060 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.policy_file = placement-policy.yaml log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.060 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.randomize_allocation_candidates = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.061 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.061 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.service_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.061 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.service_type = placement log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.062 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.062 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.062 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.063 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] placement.version = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.063 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] remote_debug.host = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.063 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] remote_debug.port = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.063 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.064 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.064 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.064 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912 >2018-06-28 09:59:15.065 4269 DEBUG oslo_service.service 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.065 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.065 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.066 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.066 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.066 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.067 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.067 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.067 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.067 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.keymap = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.068 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.068 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.068 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.069 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.auth_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.069 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.069 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.070 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.070 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.070 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.070 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.send_service_user_token = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.071 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.071 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] service_user.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.071 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.072 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.072 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.072 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.072 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.073 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.073 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.073 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.074 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.074 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.074 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.074 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.075 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.075 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.075 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.076 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.076 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.076 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.077 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute.consecutive_build_service_disable_threshold = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.077 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.077 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute.live_migration_wait_for_vif_plug = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.077 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.078 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.078 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rdp.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.078 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.079 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] guestfs.debug = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.079 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.079 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.080 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.080 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.connection_parameters = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.080 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.080 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.081 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.081 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.081 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.082 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.082 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.082 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.082 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.max_retries = -1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.083 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.min_pool_size = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.083 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.083 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.084 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.084 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.084 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.085 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.085 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.use_db_reconnect = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.085 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] database.use_tpool = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.085 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.086 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.086 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.086 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] workarounds.enable_consoleauth = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.086 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.087 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.bandwidth_update_interval = 600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.087 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.call_timeout = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.087 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.capabilities = ['hypervisor=xenserver;kvm', 'os=linux;windows'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.088 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.cell_type = compute log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.088 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.cells_config = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.088 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.db_check_interval = 60 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.088 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.enable = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.089 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.instance_update_num_instances = 1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.089 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.instance_update_sync_database_limit = 100 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.089 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.instance_updated_at_threshold = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.090 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.max_hop_count = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.090 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.mute_child_interval = 300 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.090 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.mute_weight_multiplier = -10000.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.090 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.name = nova log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.091 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.offset_weight_multiplier = 1.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.091 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.ram_weight_multiplier = 10.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.091 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.reserve_percent = 10.0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.091 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.rpc_driver_queue_base = cells.intercell log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.092 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.scheduler = nova.cells.scheduler.CellsScheduler log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.092 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.scheduler_filter_classes = ['nova.cells.filters.all_filters'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.092 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.scheduler_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.093 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.scheduler_retry_delay = 2 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.093 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cells.scheduler_weight_classes = ['nova.cells.weights.all_weighers'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.093 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.093 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.094 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.094 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.094 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.095 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.max_overflow = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.095 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.max_pool_size = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.095 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.095 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.096 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.096 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.096 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.096 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.097 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] devices.enabled_vgpu_types = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.097 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.connection_string = messaging:// log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.097 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.enabled = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.098 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.es_doc_type = notification log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.098 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.098 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.098 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.filter_error_trace = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.099 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.099 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.099 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.100 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.100 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.100 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.100 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.101 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.101 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.102 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.auth_type = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.102 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.cafile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.102 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.catalog_info = volumev3:cinderv3:publicURL log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.103 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.certfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.103 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.103 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.103 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.104 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.104 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.insecure = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.104 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.105 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.105 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.105 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] cinder.timeout = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.105 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.106 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.cells = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.106 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.106 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.compute = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.107 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.107 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.console = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.107 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.consoleauth = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.108 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.intercell = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.108 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.network = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.108 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.108 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] key_manager.backend = nova.keymgr.conf_key_mgr.ConfKeyManager log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.109 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] key_manager.fixed_key = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.109 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] osapi_v21.project_id_regex = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2912
>2018-06-28 09:59:15.109 4269 DEBUG oslo_service.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] ******************************************************************************** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2914
>2018-06-28 09:59:15.110 4269 INFO nova.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting compute node (version 18.0.0-0.20180625215857.9a8a98b.el7ost)
>2018-06-28 09:59:15.984 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 09:59:15.986 4269 WARNING nova.compute.monitors [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Excluding nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled monitors (CONF.compute_monitors).
>2018-06-28 09:59:15.987 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:59:15.987 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00299191474915 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:59:15.987 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:59:15.988 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:59:16.034 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "placement_client" acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:59:16.036 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "placement_client" released by "nova.scheduler.client.report._create_client" :: held 0.002s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:59:16.654 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-cd086395-bfb8-4c42-bb66-9b88aba26f23] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-cd086395-bfb8-4c42-bb66-9b88aba26f23", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 09:59:16.654 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.667s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 571, in _init_compute_node
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 09:59:16.655 4269 ERROR nova.compute.manager
>2018-06-28 09:59:16.658 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 09:59:16.659 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.674573898315 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 09:59:16.659 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 09:59:16.659 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 09:59:16.877 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -]
[req-375a8a7d-9186-4676-8baa-9f7b98ee7816] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-375a8a7d-9186-4676-8baa-9f7b98ee7816", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:59:16.877 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.218s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 571, in _init_compute_node >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:59:16.878 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 09:59:16.878 4269 ERROR nova.compute.manager >2018-06-28 09:59:16.878 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 09:59:16.879 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.894675970078 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 09:59:16.879 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 09:59:16.879 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 09:59:16.920 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2a5aa73f-de2d-4525-8b3f-b00c7cca475d] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-2a5aa73f-de2d-4525-8b3f-b00c7cca475d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 09:59:16.921 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.041s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 571, in _init_compute_node >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 09:59:16.921 4269 ERROR nova.compute.manager >2018-06-28 09:59:16.922 4269 DEBUG nova.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Creating RPC server for service compute start /usr/lib/python2.7/site-packages/nova/service.py:185 >2018-06-28 09:59:16.945 4269 DEBUG nova.service [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python2.7/site-packages/nova/service.py:203 >2018-06-28 09:59:16.945 4269 DEBUG nova.servicegroup.drivers.db [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] DB_Driver: join new ServiceGroup member undercloud-0.redhat.local to the compute group, service = <Service: host=undercloud-0.redhat.local, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:47 >2018-06-28 10:00:14.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:00:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:00:14.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:00:14.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 
2018-06-28 10:00:14.663 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:00:14.665 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:00:14.700 4269 INFO nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running instance usage audit for host undercloud-0.redhat.local from 2018-06-28 13:00:00 to 2018-06-28 14:00:00. 0 instances.
2018-06-28 10:00:14.726 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:00:14.726 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:00:14.726 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:00:14.727 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:00:14.727 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:00:14.835 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:00:14.836 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:00:14.836 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000761985778809 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:00:14.836 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:00:14.837 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:00:14.848 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-ab6b9ffc-74bb-4b61-98c5-29cddc721bc6] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-ab6b9ffc-74bb-4b61-98c5-29cddc721bc6", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:00:14.849 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:00:14.849 4269 ERROR nova.compute.manager
2018-06-28 10:00:14.850 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:00:14.850 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0153510570526 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:00:14.851 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:00:14.851 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:00:14.860 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e1589db4-8fac-46d4-a312-374ba3ea6f5a] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-e1589db4-8fac-46d4-a312-374ba3ea6f5a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:00:14.860 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:00:14.861 4269 ERROR nova.compute.manager
2018-06-28 10:00:14.861 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:00:14.862 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0266120433807 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:00:14.862 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:00:14.862 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:00:14.872 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-99d55192-6002-4c0c-8239-3e92c3cbc6e7] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-99d55192-6002-4c0c-8239-3e92c3cbc6e7", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:00:14.873 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:00:14.873 4269 ERROR nova.compute.manager
2018-06-28 10:00:14.874 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:00:14.874 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:01:14.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:01:14.670 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:01:14.671 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:01:14.671 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:01:14.671 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:01:14.686 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update.
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:01:15.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:01:15.673 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:01:15.673 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:01:15.674 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:01:15.674 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:01:16.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:01:16.642 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:01:16.935 4269 DEBUG nova.virt.ironic.driver 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:01:16.936 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:01:16.936 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000728130340576 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:01:16.937 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:01:16.937 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:01:16.948 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-1ec8c973-4fdf-41db-869d-19b76544ae4a] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-1ec8c973-4fdf-41db-869d-19b76544ae4a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:01:16.949 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:01:16.949 4269 ERROR nova.compute.manager >2018-06-28 10:01:16.950 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:01:16.950 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0149040222168 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:01:16.951 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:01:16.951 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:01:16.962 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-87eeec50-9307-444e-a407-1081d3d1b4eb] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-87eeec50-9307-444e-a407-1081d3d1b4eb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:01:16.963 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:01:16.963 4269 ERROR nova.compute.manager >2018-06-28 10:01:16.964 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:01:16.964 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0284731388092 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:01:16.964 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:01:16.964 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:01:16.973 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-68ee0e84-a9dc-4024-b2d9-ce0f3f63bc68] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-68ee0e84-a9dc-4024-b2d9-ce0f3f63bc68", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:01:16.974 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:01:16.974 4269 ERROR nova.compute.manager >2018-06-28 10:01:16.975 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:14.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:14.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:02:14.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:02:14.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:02:15.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:16.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:16.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:16.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:17.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:17.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:02:17.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:18.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:02:18.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:02:18.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:02:18.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000771999359131 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:02:18.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:02:18.739 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:02:18.749 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-738b5ca1-280f-48be-a765-274d229dd42d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-738b5ca1-280f-48be-a765-274d229dd42d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:02:18.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:02:18.750 4269 ERROR 
nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:02:18.750 4269 ERROR nova.compute.manager
>2018-06-28 10:02:18.751 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:02:18.751 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.013482093811 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:02:18.751 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:02:18.752 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:02:18.761 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-5c1183c6-7582-4b26-931b-1f7afd68ecdf] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-5c1183c6-7582-4b26-931b-1f7afd68ecdf", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:02:18.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:02:18.762 4269 ERROR nova.compute.manager
>2018-06-28 10:02:18.763 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:02:18.763 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0254220962524 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:02:18.763 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:02:18.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:02:18.771 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-7f34af0b-3523-4e21-8455-ed4f4c303c24] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-7f34af0b-3523-4e21-8455-ed4f4c303c24", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:02:18.772 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:02:18.772 4269 ERROR nova.compute.manager
>2018-06-28 10:03:14.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:14.663 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:15.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:15.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:03:15.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:03:15.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:03:16.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:16.667 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:17.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:18.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:18.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:18.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:03:18.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:03:18.742 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:03:18.742 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:03:18.743 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000725984573364 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:03:18.743 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:03:18.743 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:03:18.753 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-fff52b9c-ba71-478c-8b27-5fdd21266988] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-fff52b9c-ba71-478c-8b27-5fdd21266988", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:03:18.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:03:18.753 4269 ERROR nova.compute.manager
>2018-06-28 10:03:18.754 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:03:18.755 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0127441883087 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:03:18.755 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:03:18.755 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:03:18.764 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-55536a75-98ee-4fbe-bb1c-111db719d6b9] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-55536a75-98ee-4fbe-bb1c-111db719d6b9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:03:18.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:03:18.765 4269 ERROR nova.compute.manager
>2018-06-28 10:03:18.765 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:03:18.766 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0238251686096 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:03:18.766 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:03:18.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:03:18.774 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-3d002ece-0856-4b55-a182-eca5ddd83841] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-3d002ece-0856-4b55-a182-eca5ddd83841", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:03:18.775 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:03:18.775 4269 ERROR nova.compute.manager
>2018-06-28 10:03:19.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:14.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:14.676 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:14.676 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
>2018-06-28 10:04:14.705 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
>2018-06-28 10:04:14.706 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:14.706 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
>2018-06-28 10:04:16.669 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:16.691 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:16.691 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:16.692 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:16.692 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:04:16.692 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:04:16.710 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:04:17.659 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:18.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:18.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:04:18.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:04:18.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:04:19.094 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:04:19.094 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:04:19.095 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000769853591919 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:04:19.095 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:04:19.095 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:04:19.339 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
[req-79910fb6-2c17-4914-8209-8382300ff1e8] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-79910fb6-2c17-4914-8209-8382300ff1e8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:04:19.340 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.245s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:04:19.340 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:04:19.340 4269 ERROR nova.compute.manager >2018-06-28 10:04:19.342 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:04:19.342 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.24800491333 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:04:19.342 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:04:19.343 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:04:19.541 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-7129400c-592b-46aa-a12f-638ac625e6c5] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-7129400c-592b-46aa-a12f-638ac625e6c5", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:04:19.542 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.199s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:04:19.542 4269 ERROR nova.compute.manager >2018-06-28 10:04:19.543 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:04:19.543 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.449232816696 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:04:19.543 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:04:19.544 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:04:19.556 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-26a24eea-1dcb-4f7a-8cac-cfb9f452816a] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-26a24eea-1dcb-4f7a-8cac-cfb9f452816a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:04:19.556 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:04:19.556 4269 ERROR nova.compute.manager >2018-06-28 10:04:21.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:16.644 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:17.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:17.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:05:17.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:05:17.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:05:18.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:18.670 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:18.671 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:19.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:19.673 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:20.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:20.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:05:20.642 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:05:20.744 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:05:20.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:05:20.745 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000724077224731 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:05:20.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:05:20.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:05:20.755 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
[req-ec0fe65d-a0a1-4c2b-9da5-627e0c366bf9] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-ec0fe65d-a0a1-4c2b-9da5-627e0c366bf9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:05:20.755 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:05:20.756 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:05:20.756 4269 ERROR nova.compute.manager
>2018-06-28 10:05:20.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:05:20.756 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0125188827515 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:05:20.757 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:05:20.757 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:05:20.765 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-dcc3e952-c8a2-4138-a9b0-b594aa32fdfd] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-dcc3e952-c8a2-4138-a9b0-b594aa32fdfd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:05:20.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:05:20.766 4269 ERROR nova.compute.manager
>2018-06-28 10:05:20.767 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:05:20.767 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0229780673981 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:05:20.767 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:05:20.768 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:05:20.775 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-a18635b3-9cd5-4628-9144-b34f9c088f67] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-a18635b3-9cd5-4628-9144-b34f9c088f67", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:05:20.775 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:05:20.776 4269 ERROR nova.compute.manager
>2018-06-28 10:05:23.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:17.644 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:17.644 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:06:17.645 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:06:17.660 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:06:18.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:20.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:20.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:20.666 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:20.667 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:20.667 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:06:21.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:22.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:06:22.768 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:06:22.768 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:06:22.769 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000699996948242 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:06:22.769 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:06:22.769 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:06:22.778 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-738503db-11e8-4b46-bfae-1e081471f194] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-738503db-11e8-4b46-bfae-1e081471f194", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:06:22.779 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:06:22.779 4269 ERROR nova.compute.manager
>2018-06-28 10:06:22.780 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:06:22.780 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0121469497681 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:06:22.780 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:06:22.781 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:06:22.789 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-909631f4-9e1c-4028-bc5f-d6e9d3e5d74e] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-909631f4-9e1c-4028-bc5f-d6e9d3e5d74e", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:06:22.789 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:06:22.789 4269 ERROR nova.compute.manager
>2018-06-28 10:06:22.790 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:06:22.790 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0223391056061 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:06:22.791 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:06:22.791 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:06:22.801 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-1d834f84-0139-42e8-bf38-7e68b0907c76] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-1d834f84-0139-42e8-bf38-7e68b0907c76", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:06:22.802 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:06:22.802 4269 ERROR nova.compute.manager
>2018-06-28 10:06:24.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:17.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:17.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:07:17.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:07:17.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:07:19.658 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:20.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:21.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:21.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:21.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:07:22.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:23.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:23.665 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:24.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:07:24.782 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:07:24.782 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:07:24.783 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000715970993042 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:07:24.783 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:07:24.783 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:07:24.801 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-463773c0-2b78-4bcf-b9ad-c6818fb75720] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-463773c0-2b78-4bcf-b9ad-c6818fb75720", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:07:24.801 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.018s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:07:24.801 4269 ERROR nova.compute.manager >2018-06-28 10:07:24.802 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:07:24.802 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0204100608826 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:07:24.803 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:07:24.803 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:07:24.818 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-563d2e94-f16b-427c-9a1e-2b4b9855143f] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-563d2e94-f16b-427c-9a1e-2b4b9855143f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:07:24.818 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.015s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:07:24.819 4269 ERROR nova.compute.manager >2018-06-28 10:07:24.819 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:07:24.819 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.03764295578 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:07:24.820 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:07:24.820 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:07:24.833 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-3d40d044-ddcd-4dec-bcc2-81b01464b912] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-3d40d044-ddcd-4dec-bcc2-81b01464b912", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:07:24.833 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.013s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:07:24.834 4269 ERROR nova.compute.manager >2018-06-28 10:07:24.834 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:18.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:18.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:08:18.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:08:18.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:08:20.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:21.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:21.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:21.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:08:22.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:23.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:24.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:24.740 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:08:24.740 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:08:24.741 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000613212585449 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:08:24.741 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None 
_report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:08:24.741 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:08:24.751 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-7fa1156e-d1a6-42d2-8f09-346bfb891014] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-7fa1156e-d1a6-42d2-8f09-346bfb891014", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:08:24.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:08:24.752 4269 ERROR 
nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:08:24.752 4269 ERROR nova.compute.manager >2018-06-28 10:08:24.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:08:24.753 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0126450061798 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:08:24.753 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:08:24.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: 
waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:08:24.762 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-df0103a3-20af-44a0-a943-86e71ba7ccdf] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-df0103a3-20af-44a0-a943-86e71ba7ccdf", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:08:24.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:08:24.762 4269 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:08:24.762 4269 ERROR nova.compute.manager >2018-06-28 10:08:24.763 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:08:24.763 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0230140686035 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:08:24.763 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:08:24.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:08:24.771 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-f072e4b3-f163-4ce6-81ae-4b68546e7beb] Failed to 
retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-f072e4b3-f163-4ce6-81ae-4b68546e7beb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:08:24.771 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:08:24.772 4269 ERROR nova.compute.manager >2018-06-28 10:08:24.772 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:08:26.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:14.642 4269 INFO nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Updating bandwidth usage cache >2018-06-28 10:09:14.665 4269 INFO nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Bandwidth usage not supported by ironic.IronicDriver. 
>2018-06-28 10:09:18.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:18.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 10:09:19.663 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:19.663 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:09:19.663 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:09:19.678 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:09:21.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:21.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:22.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:22.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:09:23.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:24.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:24.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 10:09:24.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 10:09:25.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:25.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:26.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:26.931 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) 
get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:09:26.931 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:09:26.932 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000694990158081 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:09:26.932 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:09:26.932 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:09:27.133 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9226d0a8-dfe9-4a81-8220-749d3b7cd273] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-9226d0a8-dfe9-4a81-8220-749d3b7cd273", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:09:27.133 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.201s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:09:27.133 4269 ERROR nova.compute.manager >2018-06-28 10:09:27.134 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:09:27.135 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.203576803207 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:09:27.135 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:09:27.135 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:09:27.340 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-34524107-560e-43f7-82d7-cafd4a68444d] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-34524107-560e-43f7-82d7-cafd4a68444d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:09:27.341 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.206s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:09:27.341 4269 ERROR nova.compute.manager >2018-06-28 10:09:27.342 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:09:27.342 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.411251783371 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:09:27.343 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:09:27.343 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:09:27.352 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9abc6bf5-1080-4102-a71c-9cff428ccf5b] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-9abc6bf5-1080-4102-a71c-9cff428ccf5b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:09:27.352 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:09:27.352 4269 ERROR nova.compute.manager >2018-06-28 10:09:27.353 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:27.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:09:28.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:20.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:20.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:10:20.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:10:20.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:10:21.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:22.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:22.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:10:23.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:25.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:26.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:10:26.744 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:10:26.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:10:26.744 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000704050064087 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:10:26.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:10:26.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:10:26.754 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-529a7207-7c12-4a8e-b1c2-f3a4e9e614c2] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-529a7207-7c12-4a8e-b1c2-f3a4e9e614c2", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
2018-06-28 10:10:26.755 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:10:26.755 4269 ERROR nova.compute.manager
2018-06-28 10:10:26.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:10:26.756 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123040676117 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:10:26.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:10:26.757 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:10:26.764 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-49a93ee5-0472-470d-9785-0696e1504785] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-49a93ee5-0472-470d-9785-0696e1504785", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:10:26.765 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:10:26.765 4269 ERROR nova.compute.manager
2018-06-28 10:10:26.766 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:10:26.766 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0221419334412 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:10:26.766 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:10:26.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:10:26.774 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-b6c46838-82d6-47c5-a65c-221deb2800b5] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-b6c46838-82d6-47c5-a65c-221deb2800b5", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:10:26.774 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:10:26.774 4269 ERROR nova.compute.manager
2018-06-28 10:10:26.775 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:10:27.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:10:27.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:20.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:20.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:11:20.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:11:20.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:11:22.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:22.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:11:22.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:23.642 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:26.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:26.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:11:26.749 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:11:26.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:11:26.750 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000714063644409 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:11:26.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:11:26.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:11:26.760 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-fb4fa2cc-7937-4af3-a86c-2ed7dd4169e1] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-fb4fa2cc-7937-4af3-a86c-2ed7dd4169e1", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:11:26.761 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:11:26.761 4269 ERROR nova.compute.manager
2018-06-28 10:11:26.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:11:26.762 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0129489898682 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:11:26.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:11:26.763 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:11:26.772 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-3e8a2a9b-53a1-4b65-8d48-51509cd169d0] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-3e8a2a9b-53a1-4b65-8d48-51509cd169d0", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:11:26.772 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:11:26.773 4269 ERROR nova.compute.manager
2018-06-28 10:11:26.773 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:11:26.774 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0244331359863 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:11:26.774 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:11:26.774 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:11:26.783 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9aad135d-079b-4afa-ac28-482a2e6d2de8] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-9aad135d-079b-4afa-ac28-482a2e6d2de8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:11:26.783 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:11:26.783 4269 ERROR nova.compute.manager >2018-06-28 10:11:27.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:11:27.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:11:28.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:11:28.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:21.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:21.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:12:21.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:12:21.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:12:24.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:24.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:24.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:12:24.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:27.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:27.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:27.954 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:12:27.955 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:12:27.955 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000942945480347 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:12:27.956 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:12:27.956 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:12:27.967 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9222cdec-b7f8-40f1-9362-e278658b1f02] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-9222cdec-b7f8-40f1-9362-e278658b1f02", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:12:27.967 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:12:27.968 4269 ERROR nova.compute.manager >2018-06-28 10:12:27.969 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:12:27.969 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0145618915558 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:12:27.969 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:12:27.970 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:12:27.979 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-752b27fd-2118-4a95-b2fb-d49b302099c1] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-752b27fd-2118-4a95-b2fb-d49b302099c1", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:12:27.979 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:12:27.979 4269 ERROR nova.compute.manager >2018-06-28 10:12:27.980 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:12:27.980 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0256590843201 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:12:27.980 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:12:27.981 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:12:27.988 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-faac52e4-4895-4447-aa08-ab79545d5811] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-faac52e4-4895-4447-aa08-ab79545d5811", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:12:27.988 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:12:27.988 4269 ERROR nova.compute.manager >2018-06-28 10:12:28.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:28.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:12:29.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:23.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:23.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:13:23.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:13:23.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:13:24.658 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:24.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:13:25.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:26.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:28.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:29.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:29.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:29.748 4269 DEBUG nova.virt.ironic.driver 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:13:29.748 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:13:29.749 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000744104385376 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:13:29.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:13:29.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:13:29.760 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-a9691ed0-7a9a-4e50-8e3e-071753ba443d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-a9691ed0-7a9a-4e50-8e3e-071753ba443d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:13:29.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.011s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:13:29.761 4269 ERROR nova.compute.manager >2018-06-28 10:13:29.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:13:29.762 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0138020515442 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:13:29.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:13:29.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:13:29.772 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-ac370ed1-824b-42f2-b593-864879fb8813] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-ac370ed1-824b-42f2-b593-864879fb8813", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:13:29.772 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:13:29.772 4269 ERROR nova.compute.manager >2018-06-28 10:13:29.773 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:13:29.773 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0251660346985 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:13:29.774 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:13:29.774 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:13:29.782 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-214dcdaa-dbd9-4a38-b3d2-d5ea7fe378c3] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-214dcdaa-dbd9-4a38-b3d2-d5ea7fe378c3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:13:29.782 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:13:29.782 4269 ERROR nova.compute.manager >2018-06-28 10:13:29.783 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:30.772 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:13:33.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:24.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:24.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:14:24.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:14:24.669 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:14:25.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:25.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:14:25.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:25.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 10:14:25.670 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 10:14:26.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:27.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:28.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:28.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 10:14:29.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:30.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:31.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:31.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:14:31.737 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:14:31.737 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:14:31.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000699996948242 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:14:31.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:14:31.738 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:14:31.942 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-43f90522-0a35-44a6-9e5d-cac2f334f76e] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-43f90522-0a35-44a6-9e5d-cac2f334f76e", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:14:31.943 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.204s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:14:31.943 4269 ERROR nova.compute.manager >2018-06-28 10:14:31.944 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:14:31.944 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.207099199295 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:14:31.944 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:14:31.945 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:14:32.146 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-b5e2ddf1-8274-4941-af8b-b8bf096de570] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-b5e2ddf1-8274-4941-af8b-b8bf096de570", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:14:32.146 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.201s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:14:32.147 4269 ERROR nova.compute.manager >2018-06-28 10:14:32.147 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:14:32.147 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.410493135452 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:14:32.148 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:14:32.148 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:14:32.157 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e28fe4bc-0169-4e5f-bd31-c38a9560a604] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-e28fe4bc-0169-4e5f-bd31-c38a9560a604", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:14:32.157 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:14:32.158 4269 ERROR nova.compute.manager
>2018-06-28 10:14:32.158 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:14:38.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:26.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:26.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:15:26.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:26.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:15:26.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:15:26.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:15:28.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:28.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:31.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:31.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:31.747 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:15:31.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:15:31.747 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00065803527832 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:15:31.748 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:15:31.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:15:31.757 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-860053b1-1321-4dc0-b569-f77ee8551337] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-860053b1-1321-4dc0-b569-f77ee8551337", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:15:31.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:15:31.758 4269 ERROR nova.compute.manager
>2018-06-28 10:15:31.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:15:31.759 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0122690200806 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:15:31.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:15:31.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:15:31.769 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-927fb486-32e5-402f-8950-2a0bfde27bd3] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-927fb486-32e5-402f-8950-2a0bfde27bd3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:15:31.769 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:15:31.770 4269 ERROR nova.compute.manager
>2018-06-28 10:15:31.770 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:15:31.770 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0239179134369 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:15:31.771 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:15:31.771 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:15:31.779 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-cf735963-f698-45f6-babb-bc65481f754c] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-cf735963-f698-45f6-babb-bc65481f754c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:15:31.780 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:15:31.780 4269 ERROR nova.compute.manager
>2018-06-28 10:15:31.780 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:32.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:33.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:15:36.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:16:27.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:16:27.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:16:27.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:16:27.671 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:16:28.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:16:28.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:16:30.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:16:30.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:16:31.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:16:31.741 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:16:31.741 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:16:31.741 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000723123550415 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:16:31.742 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:16:31.742 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:16:31.751 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-984e6d63-d43f-4757-8597-e0bc1d97caf8] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-984e6d63-d43f-4757-8597-e0bc1d97caf8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:16:31.752 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:16:31.752 4269 ERROR nova.compute.manager
>2018-06-28 10:16:31.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:16:31.753 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0121002197266 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:16:31.753 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:16:31.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:16:31.762 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-b160e817-7be5-4dd2-a7da-78017c98e57f] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-b160e817-7be5-4dd2-a7da-78017c98e57f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:16:31.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:16:31.763 4269 ERROR nova.compute.manager >2018-06-28 10:16:31.763 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:16:31.764 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0228850841522 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:16:31.764 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:16:31.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:16:31.772 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d196cd93-d913-447d-a2c3-0c017971672a] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d196cd93-d913-447d-a2c3-0c017971672a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:16:31.772 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:16:31.772 4269 ERROR nova.compute.manager >2018-06-28 10:16:32.773 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:16:33.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:16:33.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:16:34.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:28.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:28.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:17:28.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:17:28.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:17:29.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:29.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:17:30.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:30.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:31.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:31.924 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:17:31.924 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:17:31.924 4269 DEBUG 
nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000707864761353 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:17:31.925 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:17:31.925 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:17:31.934 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e5dcdc5a-2104-4068-bef2-4067ede546ee] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-e5dcdc5a-2104-4068-bef2-4067ede546ee", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:17:31.935 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:17:31.935 4269 ERROR nova.compute.manager >2018-06-28 10:17:31.936 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:17:31.936 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123178958893 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:17:31.936 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:17:31.936 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:17:31.944 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-16163a1f-1303-466e-9ee9-454e4fc16241] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-16163a1f-1303-466e-9ee9-454e4fc16241", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:17:31.945 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:17:31.945 4269 ERROR nova.compute.manager >2018-06-28 10:17:31.946 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:17:31.946 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0223050117493 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:17:31.946 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:17:31.946 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:17:31.954 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-19c01dc3-b119-40e3-a3b5-f3ad4ef58b8d] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-19c01dc3-b119-40e3-a3b5-f3ad4ef58b8d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:17:31.954 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:17:31.955 4269 ERROR nova.compute.manager >2018-06-28 10:17:34.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:34.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:34.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:35.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:17:37.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:28.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:28.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:18:28.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:18:28.671 4269 DEBUG nova.compute.manager 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:18:30.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:30.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:31.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:31.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:18:33.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:33.733 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:18:33.733 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:18:33.733 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000623941421509 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:18:33.734 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:18:33.734 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:18:33.743 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
[req-9b419b9b-93d9-472a-8036-d834e5639236] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-9b419b9b-93d9-472a-8036-d834e5639236", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:18:33.743 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:18:33.744 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:18:33.744 4269 ERROR nova.compute.manager >2018-06-28 10:18:33.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:18:33.745 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0117139816284 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:18:33.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:18:33.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:18:33.753 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-aac9ed06-f5e8-4164-bc04-5c0cebde8302] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-aac9ed06-f5e8-4164-bc04-5c0cebde8302", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:18:33.754 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:18:33.754 4269 ERROR nova.compute.manager >2018-06-28 10:18:33.755 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:18:33.755 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0222170352936 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:18:33.755 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:18:33.756 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:18:33.763 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-998f95e3-1960-46f0-af57-5268e535eba8] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-998f95e3-1960-46f0-af57-5268e535eba8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:18:33.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:18:33.764 4269 ERROR nova.compute.manager >2018-06-28 10:18:34.764 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:35.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:36.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:18:36.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:26.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:26.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:26.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 10:19:26.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 10:19:28.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:28.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:19:28.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:19:28.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:19:31.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:31.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 10:19:32.649 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:32.650 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:33.640 4269 DEBUG oslo_service.periodic_task 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:33.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:19:34.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:34.918 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:19:34.918 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:19:34.918 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000698089599609 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:19:34.919 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:19:34.919 4269 DEBUG oslo_concurrency.lockutils 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:19:35.109 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-1d0df18e-a9c7-4694-bb05-cbd2fb088e7b] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-1d0df18e-a9c7-4694-bb05-cbd2fb088e7b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:19:35.110 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.191s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager 
self._update_available_resource(context, resources) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:19:35.110 4269 ERROR nova.compute.manager >2018-06-28 10:19:35.110 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:19:35.111 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.193235874176 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:19:35.111 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:19:35.111 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:19:35.283 4269 ERROR 
nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-180e0123-d07f-427a-9b8d-362ae88dbb32] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-180e0123-d07f-427a-9b8d-362ae88dbb32", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:19:35.284 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.172s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager return f(*args, **kwargs) 
>2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in 
_get_providers_in_tree >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:19:35.284 4269 ERROR nova.compute.manager >2018-06-28 10:19:35.284 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:19:35.285 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.367175102234 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:19:35.285 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:19:35.285 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:19:35.294 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c5ed4f2b-dcfb-4b3d-8b80-e48a232410db] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-c5ed4f2b-dcfb-4b3d-8b80-e48a232410db", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:19:35.294 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:19:35.294 4269 ERROR nova.compute.manager >2018-06-28 10:19:36.294 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:36.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:36.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:37.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:38.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:19:48.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:28.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:28.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:20:28.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:20:28.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:20:32.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:34.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:34.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:34.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:20:35.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:35.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:35.737 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:20:35.737 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:20:35.737 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000656843185425 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:20:35.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:20:35.738 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:20:35.748 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c1c25e73-ea55-45b4-8607-4e4f9de50d2d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-c1c25e73-ea55-45b4-8607-4e4f9de50d2d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:20:35.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:20:35.749 4269 ERROR 
nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:20:35.749 4269 
ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:20:35.749 4269 ERROR nova.compute.manager >2018-06-28 10:20:35.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:20:35.749 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0126528739929 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:20:35.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:20:35.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:20:35.759 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-972803b0-e495-4b56-8763-b367253bddcb] Failed to 
retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-972803b0-e495-4b56-8763-b367253bddcb", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:20:35.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:20:35.760 4269 ERROR nova.compute.manager >2018-06-28 10:20:35.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:20:35.760 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0236339569092 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:20:35.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:20:35.761 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:20:35.769 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-6fd340df-6ae4-45ab-8224-99c2056a6c27] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-6fd340df-6ae4-45ab-8224-99c2056a6c27", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:20:35.770 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:20:35.770 4269 ERROR nova.compute.manager >2018-06-28 10:20:36.770 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:37.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:20:38.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:21:29.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:21:29.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:21:29.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:21:29.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:21:32.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:21:34.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:21:34.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:21:35.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:21:35.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:21:35.739 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:21:35.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:21:35.739 4269 DEBUG 
nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000693082809448 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:21:35.740 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:21:35.740 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:21:35.749 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-34033990-74a1-4c8b-926c-c7357b010b0d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-34033990-74a1-4c8b-926c-c7357b010b0d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:21:35.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:21:35.750 4269 ERROR nova.compute.manager
>2018-06-28 10:21:35.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:21:35.751 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0120580196381 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:21:35.751 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:21:35.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:21:35.759 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-74149888-5e6a-40b8-adec-82ac29ea19ee] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-74149888-5e6a-40b8-adec-82ac29ea19ee", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:21:35.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:21:35.760 4269 ERROR nova.compute.manager
>2018-06-28 10:21:35.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:21:35.761 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0223760604858 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:21:35.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:21:35.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:21:35.769 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-3c3d91f4-048b-43af-894d-a43f189365e0] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-3c3d91f4-048b-43af-894d-a43f189365e0", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:21:35.769 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:21:35.769 4269 ERROR nova.compute.manager
>2018-06-28 10:21:36.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:21:37.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:21:37.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:21:39.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:21:42.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:31.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:31.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:22:31.658 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:22:31.674 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:22:33.658 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:36.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:36.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:22:37.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:37.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:37.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:37.988 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:22:37.988 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:22:37.988 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000744819641113 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:22:37.989 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:22:37.989 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:22:38.001 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-10b976aa-c5ae-4a01-9c59-da0e0807eabc] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-10b976aa-c5ae-4a01-9c59-da0e0807eabc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:22:38.002 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.012s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:22:38.002 4269 ERROR nova.compute.manager
>2018-06-28 10:22:38.003 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:22:38.003 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0154738426208 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:22:38.004 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:22:38.004 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:22:38.014 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-58a02774-c9ad-4971-9fca-23cc58d01e3d] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-58a02774-c9ad-4971-9fca-23cc58d01e3d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:22:38.014 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:22:38.015 4269 ERROR nova.compute.manager
>2018-06-28 10:22:38.015 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:22:38.016 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0277988910675 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:22:38.016 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:22:38.016 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:22:38.025 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-23aedf3a-39a9-4ac8-8e76-e8d6eb26ca7a] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-23aedf3a-39a9-4ac8-8e76-e8d6eb26ca7a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:22:38.026 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:22:38.026 4269 ERROR nova.compute.manager
>2018-06-28 10:22:38.027 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:39.026 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:22:39.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:23:32.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:23:32.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:23:32.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:23:32.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update.
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:23:34.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:37.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:37.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:23:38.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:38.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:39.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:39.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:39.734 4269 DEBUG nova.virt.ironic.driver 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:23:39.734 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:23:39.734 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000808000564575 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:23:39.735 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:23:39.735 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:23:39.744 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-5a0e6ee2-83f5-4226-af2f-904637d71ea9] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-5a0e6ee2-83f5-4226-af2f-904637d71ea9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:23:39.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:23:39.745 4269 ERROR nova.compute.manager >2018-06-28 10:23:39.746 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:23:39.746 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0122649669647 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:23:39.746 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:23:39.747 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:23:39.755 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-061ee646-149a-4356-af2e-9c3656fab5ad] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-061ee646-149a-4356-af2e-9c3656fab5ad", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:23:39.755 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:23:39.755 4269 ERROR nova.compute.manager >2018-06-28 10:23:39.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:23:39.756 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0224227905273 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:23:39.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:23:39.757 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:23:39.764 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9cba6824-190e-4b57-ba68-1572ee1b25e8] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-9cba6824-190e-4b57-ba68-1572ee1b25e8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:23:39.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:23:39.765 4269 ERROR nova.compute.manager >2018-06-28 10:23:39.765 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:40.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:23:45.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:30.660 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:30.661 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 10:24:30.675 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 10:24:34.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:34.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:24:34.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of 
instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:24:34.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:24:36.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:38.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:38.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:24:39.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:39.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:24:39.733 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:24:39.733 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:24:39.734 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000767946243286 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:24:39.734 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:24:39.734 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:24:39.931 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e1baedce-947e-4fd5-9c0e-44ec8af8bf81] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-e1baedce-947e-4fd5-9c0e-44ec8af8bf81", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:24:39.932 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.198s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:24:39.932 4269 ERROR 
nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:24:39.932 4269 ERROR nova.compute.manager
>2018-06-28 10:24:39.933 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:24:39.933 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.200094938278 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:24:39.933 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:24:39.934 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:24:40.127 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-508357e2-d5f6-4c28-b8dc-e13b9e7f7728] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-508357e2-d5f6-4c28-b8dc-e13b9e7f7728", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:24:40.128 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.194s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:24:40.128 4269 ERROR nova.compute.manager
>2018-06-28 10:24:40.129 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:24:40.129 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.3960750103 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:24:40.129 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:24:40.130 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:24:40.139 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-bafbd47d-8e9b-41e3-a6a9-c69a178bf932] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-bafbd47d-8e9b-41e3-a6a9-c69a178bf932", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:24:40.139 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:24:40.139 4269 ERROR nova.compute.manager
>2018-06-28 10:24:40.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:24:40.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:24:40.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:24:41.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:24:42.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:24:42.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
>2018-06-28 10:24:57.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:35.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:35.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:25:35.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:25:35.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:25:37.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:38.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:38.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:25:39.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:39.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:25:39.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:25:39.739 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000623226165771 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:25:39.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:25:39.740 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:25:39.749 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-611133f8-1b1c-4157-882e-101f07eab15a] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-611133f8-1b1c-4157-882e-101f07eab15a", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:25:39.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:25:39.750 4269 ERROR nova.compute.manager
>2018-06-28 10:25:39.751 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:25:39.751 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0125432014465 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:25:39.751 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:25:39.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:25:39.760 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-5b901d6e-8b3d-47d8-83fc-30dfcc5b4d46] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-5b901d6e-8b3d-47d8-83fc-30dfcc5b4d46", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:25:39.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:25:39.760 4269 ERROR nova.compute.manager
>2018-06-28 10:25:39.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:25:39.761 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.022693157196 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:25:39.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:25:39.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:25:39.769 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-79cc789d-c9e0-4fdc-89ca-3661483f0546] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-79cc789d-c9e0-4fdc-89ca-3661483f0546", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:25:39.769 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:25:39.769 4269 ERROR nova.compute.manager
>2018-06-28 10:25:40.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:40.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:40.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:41.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:41.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:25:49.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:37.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:37.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:26:37.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:26:37.673 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:26:38.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:38.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:38.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:26:40.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:40.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:41.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:26:41.736 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:26:41.736 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:26:41.736 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000649929046631 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:26:41.737 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None
_report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:26:41.737 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:26:41.747 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-b389e3d2-3f20-459f-8cbb-7b07a0d51156] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-b389e3d2-3f20-459f-8cbb-7b07a0d51156", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:26:41.747 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:26:41.747 4269 ERROR 
nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:26:41.747 4269 ERROR nova.compute.manager >2018-06-28 10:26:41.748 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:26:41.748 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.012580871582 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:26:41.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:26:41.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: 
waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:26:41.757 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2dd6bfb5-9df1-4650-b75c-003241c40a0b] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-2dd6bfb5-9df1-4650-b75c-003241c40a0b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:26:41.757 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:26:41.757 4269 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:26:41.757 4269 ERROR nova.compute.manager >2018-06-28 10:26:41.758 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:26:41.758 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0226068496704 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:26:41.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:26:41.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:26:41.767 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-25f4fe06-ff18-44f6-8e06-a45e0b580331] Failed to 
retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-25f4fe06-ff18-44f6-8e06-a45e0b580331", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:26:41.767 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:26:41.768 4269 ERROR nova.compute.manager >2018-06-28 10:26:42.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:26:42.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:26:42.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:27:37.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:27:37.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:27:37.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:27:37.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:27:40.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:27:40.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:27:40.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:27:41.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:27:42.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:27:42.923 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:27:42.923 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:27:42.924 4269 DEBUG 
nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000827074050903 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:27:42.924 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:27:42.924 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:27:42.934 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-b8b4a7fe-8a90-4d20-9ec0-f158c5e360d6] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-b8b4a7fe-8a90-4d20-9ec0-f158c5e360d6", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:27:42.934 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:27:42.934 4269 ERROR nova.compute.manager >2018-06-28 10:27:42.935 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:27:42.935 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0126509666443 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:27:42.936 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:27:42.936 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:27:42.944 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-79575e33-a71d-4c90-9a9e-c46d65fef51f] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-79575e33-a71d-4c90-9a9e-c46d65fef51f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:27:42.945 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:27:42.945 4269 ERROR nova.compute.manager
2018-06-28 10:27:42.945 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:27:42.946 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0228300094604 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:27:42.946 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:27:42.946 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:27:42.953 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-685b003d-7545-4653-bb2c-d111f2302919] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-685b003d-7545-4653-bb2c-d111f2302919", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:27:42.954 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:27:42.954 4269 ERROR nova.compute.manager
2018-06-28 10:27:42.955 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:27:43.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:27:44.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:27:44.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:27:53.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:39.658 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:39.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:28:39.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:28:39.673 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:28:40.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:40.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:40.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:28:42.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:43.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:44.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:44.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:44.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:28:44.734 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:28:44.735 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:28:44.735 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000686168670654 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:28:44.735 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:28:44.736 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:28:44.745 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-1175c5f3-1f0f-4f61-913a-e013cec00a44] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-1175c5f3-1f0f-4f61-913a-e013cec00a44", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:28:44.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:28:44.746 4269 ERROR nova.compute.manager
2018-06-28 10:28:44.746 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:28:44.747 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123000144958 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:28:44.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:28:44.747 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:28:44.755 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-cfdeb6b5-b532-4d6d-b209-2043459caef4] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-cfdeb6b5-b532-4d6d-b209-2043459caef4", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:28:44.755 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:28:44.756 4269 ERROR nova.compute.manager
2018-06-28 10:28:44.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:28:44.756 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0222301483154 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:28:44.757 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:28:44.757 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:28:44.765 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-354539ea-d809-4faa-9a8c-9625ff2cf2af] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-354539ea-d809-4faa-9a8c-9625ff2cf2af", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:28:44.765 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:28:44.765 4269 ERROR nova.compute.manager
2018-06-28 10:28:44.766 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:14.766 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:32.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:32.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
2018-06-28 10:29:32.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
2018-06-28 10:29:36.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:40.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:40.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:29:40.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:29:40.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:29:41.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:41.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:41.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:29:43.642 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:44.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:45.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:45.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:46.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:46.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:29:46.731 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:29:46.732 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:29:46.732 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000591993331909 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:29:46.732 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:29:46.732 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:29:46.921 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-22a9900e-3c7a-4119-b900-13f3c7eea2dc] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-22a9900e-3c7a-4119-b900-13f3c7eea2dc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:29:46.922 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.189s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:29:46.922 4269 ERROR nova.compute.manager >2018-06-28 10:29:46.922 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:29:46.923 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.191517829895 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:29:46.923 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:29:46.923 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:29:47.121 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-87c83c50-5ee2-4d80-bf72-cfd156325e6c] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-87c83c50-5ee2-4d80-bf72-cfd156325e6c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:29:47.122 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.198s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:29:47.122 4269 ERROR nova.compute.manager >2018-06-28 10:29:47.123 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:29:47.123 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.391699790955 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:29:47.123 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:29:47.124 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:29:47.132 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-413c25eb-0eea-455a-9388-cd9b97dfec99] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-413c25eb-0eea-455a-9388-cd9b97dfec99", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:29:47.132 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:29:47.132 4269 ERROR nova.compute.manager >2018-06-28 10:29:53.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:29:53.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:29:53.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 10:30:07.669 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:40.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:40.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:30:40.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:30:40.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:30:42.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:42.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:42.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:30:45.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:45.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:45.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:46.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:46.640 4269 DEBUG oslo_service.periodic_task 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:48.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:30:48.739 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:30:48.740 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:30:48.740 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000663995742798 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:30:48.740 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:30:48.740 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:30:48.750 4269 ERROR 
nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d40da8fd-efe9-4c7a-9999-4f5fc9868b03] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-d40da8fd-efe9-4c7a-9999-4f5fc9868b03", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:30:48.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager return f(*args, **kwargs) 
>2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in 
_get_providers_in_tree
>2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:30:48.751 4269 ERROR nova.compute.manager
>2018-06-28 10:30:48.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:30:48.752 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0127601623535 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:30:48.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:30:48.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:30:48.761 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-f769ca9e-97ad-4355-9952-201c35808252] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-f769ca9e-97ad-4355-9952-201c35808252", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:30:48.761 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:30:48.761 4269 ERROR nova.compute.manager
>2018-06-28 10:30:48.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:30:48.762 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0229520797729 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:30:48.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:30:48.763 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:30:48.770 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-6fbe88cc-a203-4233-8336-dd9a5a62a366] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-6fbe88cc-a203-4233-8336-dd9a5a62a366", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:30:48.771 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:30:48.771 4269 ERROR nova.compute.manager
>2018-06-28 10:31:41.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:41.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:31:41.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:31:41.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:31:42.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:44.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:44.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:31:46.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:46.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:46.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:47.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:47.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:48.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:31:48.733 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:31:48.734 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:31:48.734 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000591993331909 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:31:48.734 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:31:48.734 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:31:48.743 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-05966d9b-3c34-4c3a-a4ec-ef0735521430] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-05966d9b-3c34-4c3a-a4ec-ef0735521430", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:31:48.744 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:31:48.744 4269 ERROR nova.compute.manager
>2018-06-28 10:31:48.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:31:48.745 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0115969181061 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:31:48.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:31:48.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:31:48.754 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2f7db268-3b21-429f-947e-86c382bae4f6] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-2f7db268-3b21-429f-947e-86c382bae4f6", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:31:48.754 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:31:48.754 4269 ERROR nova.compute.manager
>2018-06-28 10:31:48.755 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:31:48.755 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0218088626862 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:31:48.755 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:31:48.756 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:31:48.763 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-8df0cb04-6ac7-4664-bdf3-5f8bfaa1d1a9] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-8df0cb04-6ac7-4664-bdf3-5f8bfaa1d1a9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:31:48.763 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:31:48.763 4269 ERROR nova.compute.manager
>2018-06-28 10:31:55.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:42.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:42.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:32:42.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:32:42.670 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:32:44.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:46.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:46.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:46.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:46.652 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:32:47.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:48.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:32:48.939 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:32:48.940 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:32:48.940 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000669002532959 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:32:48.940 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:32:48.941 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:32:48.950 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-81d2b9cf-0096-416c-bf9c-97397eac8059] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-81d2b9cf-0096-416c-bf9c-97397eac8059", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:32:48.950 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:32:48.951 4269
ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:32:48.951 4269 ERROR nova.compute.manager >2018-06-28 10:32:48.952 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:32:48.952 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.012521982193 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:32:48.952 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:32:48.952 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:32:48.961 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-8832ccd2-1eca-48c7-86d3-eafd9a53a56b] Failed to 
retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-8832ccd2-1eca-48c7-86d3-eafd9a53a56b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:32:48.961 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 
710, in _update_available_resource >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager raise 
exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:32:48.961 4269 ERROR nova.compute.manager >2018-06-28 10:32:48.962 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:32:48.962 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0229299068451 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:32:48.962 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:32:48.963 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:32:48.970 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-1d7a5d48-cc04-49c0-bd9a-92c1d4b99223] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-1d7a5d48-cc04-49c0-bd9a-92c1d4b99223", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:32:48.970 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:32:48.971 4269 ERROR nova.compute.manager >2018-06-28 10:32:48.971 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:32:49.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:42.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:42.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:33:42.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:33:42.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:33:46.658 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:47.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:47.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:47.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:47.652 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:33:48.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:33:48.737 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:33:48.737 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:33:48.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000659942626953 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:33:48.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:33:48.738 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:33:48.748 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
[req-a447fb8f-c071-4618-b644-9b3d2a958945] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-a447fb8f-c071-4618-b644-9b3d2a958945", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:33:48.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:33:48.749 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:33:48.749 4269 ERROR nova.compute.manager >2018-06-28 10:33:48.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:33:48.750 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0126688480377 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:33:48.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:33:48.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:33:48.759 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2e432925-e28e-45af-95e4-577acb05c4ea] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-2e432925-e28e-45af-95e4-577acb05c4ea", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:33:48.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:33:48.759 4269 ERROR nova.compute.manager
2018-06-28 10:33:48.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:33:48.760 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0229868888855 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:33:48.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:33:48.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:33:48.768 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-8c7673b8-5c2a-4e16-8569-fed354969199] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-8c7673b8-5c2a-4e16-8569-fed354969199", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:33:48.768 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:33:48.768 4269 ERROR nova.compute.manager
2018-06-28 10:33:48.769 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:33:49.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:33:49.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:33:58.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:35.659 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:35.660 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
2018-06-28 10:34:35.674 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
2018-06-28 10:34:44.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:44.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:34:44.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:34:44.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:34:46.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:47.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:47.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:34:48.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:48.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:49.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:50.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:50.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:50.926 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:34:50.926 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:34:50.926 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000692129135132 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:34:50.927 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:34:50.927 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:34:51.122 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2678c82b-9066-4afa-81b8-b9e83e68acec] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-2678c82b-9066-4afa-81b8-b9e83e68acec", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:34:51.122 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.195s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:34:51.123 4269 ERROR nova.compute.manager
2018-06-28 10:34:51.123 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:34:51.124 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.197824001312 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:34:51.124 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:34:51.124 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:34:51.299 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-8ad6bfd0-ac76-43fd-be8b-5bfdb6474b60] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-8ad6bfd0-ac76-43fd-be8b-5bfdb6474b60", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:34:51.300 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.176s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:34:51.300 4269 ERROR nova.compute.manager
2018-06-28 10:34:51.301 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:34:51.301 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.375224113464 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:34:51.301 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:34:51.302 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:34:51.310 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-fdd394e8-4ec2-43a2-a304-cfabe2fb923c] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-fdd394e8-4ec2-43a2-a304-cfabe2fb923c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:34:51.311 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:34:51.311 4269 ERROR nova.compute.manager
2018-06-28 10:34:51.312 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:34:57.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
2018-06-28 10:35:08.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:44.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:44.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:35:44.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:35:44.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:35:46.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:47.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:47.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:35:48.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:50.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:51.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:51.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:52.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:52.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:35:52.731 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:35:52.731 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:35:52.732 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000670194625854 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:35:52.732 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:35:52.732 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:35:52.742 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-852c39b3-512e-4171-88a9-112443ecaf8f] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-852c39b3-512e-4171-88a9-112443ecaf8f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:35:52.743 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager return f(*args, **kwargs)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager self._update(context, cn)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager return f(self, *a, **k)
2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in
_get_providers_in_tree >2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:35:52.743 4269 ERROR nova.compute.manager >2018-06-28 10:35:52.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:35:52.744 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0129699707031 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:35:52.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:35:52.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:35:52.752 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-0a6c5463-c2d9-41ee-9b0d-5442d7485f16] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-0a6c5463-c2d9-41ee-9b0d-5442d7485f16", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:35:52.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:35:52.753 4269 ERROR nova.compute.manager >2018-06-28 10:35:52.754 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:35:52.754 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0227839946747 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:35:52.754 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:35:52.754 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:35:52.762 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-189442dc-7cfc-42f4-8e7d-fe4395d54685] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-189442dc-7cfc-42f4-8e7d-fe4395d54685", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:35:52.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:35:52.762 4269 ERROR nova.compute.manager >2018-06-28 10:36:02.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:46.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:46.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:36:46.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:36:46.669 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:36:47.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:47.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:47.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:36:48.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:51.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:51.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:52.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:53.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:54.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:36:54.737 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:36:54.737 4269 DEBUG nova.compute.resource_tracker 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:36:54.737 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000664949417114 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:36:54.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:36:54.738 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:36:54.747 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-098b6f39-e174-44cd-b6dc-4cc95c67e4ea] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-098b6f39-e174-44cd-b6dc-4cc95c67e4ea", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:36:54.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:36:54.748 4269 ERROR nova.compute.manager >2018-06-28 10:36:54.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:36:54.749 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.012542963028 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:36:54.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:36:54.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:36:54.758 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-67185c79-5134-429b-b250-126c3ad20645] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-67185c79-5134-429b-b250-126c3ad20645", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:36:54.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:36:54.759 4269 ERROR nova.compute.manager >2018-06-28 10:36:54.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:36:54.759 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0227069854736 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:36:54.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:36:54.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:36:54.767 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-cb580e5b-a2a8-495c-9690-956dd6e4e357] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-cb580e5b-a2a8-495c-9690-956dd6e4e357", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:36:54.767 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:36:54.767 4269 ERROR nova.compute.manager >2018-06-28 10:37:46.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:46.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:37:46.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:37:46.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:37:48.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:48.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:48.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:37:50.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:52.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:53.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:55.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:55.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:37:55.733 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:37:55.733 4269 DEBUG nova.compute.resource_tracker 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:37:55.733 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000591993331909 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:37:55.734 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:37:55.734 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:37:55.743 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-64386ec9-bcd2-4a4d-9a5e-b9066b9ffbb7] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-64386ec9-bcd2-4a4d-9a5e-b9066b9ffbb7", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:37:55.744 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:37:55.744 4269 ERROR nova.compute.manager >2018-06-28 10:37:55.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:37:55.745 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0121269226074 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:37:55.745 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:37:55.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:37:55.754 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-f7be31b1-53a7-42e6-be56-9254f0d7bfa5] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-f7be31b1-53a7-42e6-be56-9254f0d7bfa5", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:37:55.754 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:37:55.755 4269 ERROR nova.compute.manager >2018-06-28 10:37:55.755 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:37:55.755 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0227899551392 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:37:55.756 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:37:55.756 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:37:55.764 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-7edea184-8251-4212-ab83-dc8c2c3399dd] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-7edea184-8251-4212-ab83-dc8c2c3399dd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:37:55.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:37:55.765 4269 ERROR nova.compute.manager >2018-06-28 10:38:04.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:47.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:47.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:38:47.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:38:47.673 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:38:49.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:49.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:38:50.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:50.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:52.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:55.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:56.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:56.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:38:56.929 4269 DEBUG nova.virt.ironic.driver 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:38:56.930 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:38:56.930 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000668048858643 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:38:56.930 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:38:56.930 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:38:56.940 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2648e601-070f-4c3e-a16a-b071f3ce5960] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-2648e601-070f-4c3e-a16a-b071f3ce5960", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:38:56.940 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:38:56.940 4269 ERROR nova.compute.manager
>2018-06-28 10:38:56.941 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:38:56.941 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0122289657593 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:38:56.942 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:38:56.942 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:38:56.950 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-6b0438bd-2ad2-4594-b8c5-78ebfa3cf2de] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-6b0438bd-2ad2-4594-b8c5-78ebfa3cf2de", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:38:56.950 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:38:56.951 4269 ERROR nova.compute.manager
>2018-06-28 10:38:56.951 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:38:56.952 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0224599838257 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:38:56.952 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:38:56.952 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:38:56.960 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d31c11e3-d4a2-427f-a4a7-6404c0bf04ce] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d31c11e3-d4a2-427f-a4a7-6404c0bf04ce", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:38:56.960 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:38:56.960 4269 ERROR nova.compute.manager
>2018-06-28 10:39:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:38.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:38.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
>2018-06-28 10:39:38.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
>2018-06-28 10:39:48.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:48.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:39:48.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:39:48.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:39:50.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:51.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:51.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:39:52.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:52.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:52.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:55.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:57.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:39:57.739 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:39:57.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:39:57.739 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000634908676147 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:39:57.740 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:39:57.740 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:39:57.940 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9b352837-50ca-403f-9a08-1c9273cb8d1f] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-9b352837-50ca-403f-9a08-1c9273cb8d1f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:39:57.940 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.200s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:39:57.940 4269 ERROR nova.compute.manager
>2018-06-28 10:39:57.941 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:39:57.941 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.202736854553 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:39:57.942 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:39:57.942 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:39:58.137 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c3b923e8-4708-4f0e-acbd-f8be9b41bb35] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-c3b923e8-4708-4f0e-acbd-f8be9b41bb35", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:39:58.138 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.196s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:39:58.138 4269 ERROR nova.compute.manager
>2018-06-28 10:39:58.139 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:39:58.139 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.400147914886 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:39:58.139 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:39:58.139 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:39:58.148 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-059aed42-77d2-4983-ba08-13e1a6e53a29] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-059aed42-77d2-4983-ba08-13e1a6e53a29", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:39:58.148 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:39:58.148 4269 ERROR nova.compute.manager
>2018-06-28 10:40:07.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:07.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:07.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
>2018-06-28 10:40:12.669 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:48.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:48.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:40:48.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:40:48.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:40:52.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:52.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:53.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:53.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:40:53.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:40:56.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:40:56.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:40:57.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:40:59.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:40:59.932 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:40:59.932 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:40:59.932 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000776052474976 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 
10:40:59.933 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:40:59.933 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:40:59.943 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c1919135-777c-40c2-b749-d22d613536aa] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-c1919135-777c-40c2-b749-d22d613536aa", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:40:59.943 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:40:59.943 4269 ERROR nova.compute.manager >2018-06-28 10:40:59.944 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:40:59.944 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123629570007 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:40:59.944 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:40:59.945 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:40:59.953 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-21d31ca3-2c19-4acf-9400-aeae598399d2] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-21d31ca3-2c19-4acf-9400-aeae598399d2", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:40:59.954 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:40:59.954 4269 ERROR nova.compute.manager >2018-06-28 10:40:59.955 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:40:59.955 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0231289863586 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:40:59.955 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:40:59.955 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:40:59.963 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c787f881-3576-46e6-a417-d6d1e0e1ea13] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-c787f881-3576-46e6-a417-d6d1e0e1ea13", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:40:59.963 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:40:59.964 4269 ERROR nova.compute.manager >2018-06-28 10:41:50.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:50.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:41:50.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:41:50.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:41:52.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:54.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:41:54.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:56.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:57.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:41:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:00.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:00.736 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:42:00.736 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:42:00.736 4269 
DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000679016113281 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:42:00.737 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:42:00.737 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:42:00.747 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-0e9fed26-bfff-4a3c-b291-85afb5a1a206] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-0e9fed26-bfff-4a3c-b291-85afb5a1a206", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:42:00.747 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:42:00.747 4269 ERROR nova.compute.manager >2018-06-28 10:42:00.748 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:42:00.748 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0126140117645 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:42:00.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:42:00.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:42:00.757 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-4e1660dd-d4b0-434d-8cb9-c3db4b54ed8f] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-4e1660dd-d4b0-434d-8cb9-c3db4b54ed8f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:42:00.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:42:00.758 4269 ERROR nova.compute.manager >2018-06-28 10:42:00.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:42:00.759 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0231368541718 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:42:00.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:42:00.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:42:00.767 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2743a585-f4eb-46e6-9dae-34c59ae9029b] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-2743a585-f4eb-46e6-9dae-34c59ae9029b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:42:00.767 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:42:00.768 4269 ERROR nova.compute.manager >2018-06-28 10:42:12.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:52.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:52.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:42:52.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:42:52.668 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:42:53.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:54.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:42:55.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:55.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:57.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:42:58.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:02.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:02.735 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:43:02.735 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 
- - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:43:02.736 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000710964202881 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:43:02.736 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:43:02.736 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:43:02.746 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-62a152df-35e4-47e4-8a8b-45abdb5ac331] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-62a152df-35e4-47e4-8a8b-45abdb5ac331", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:43:02.746 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:43:02.746 4269 ERROR nova.compute.manager >2018-06-28 10:43:02.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:43:02.747 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0122129917145 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:43:02.748 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:43:02.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:43:02.756 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-83fb4b7c-2904-4552-9e58-75117a25a2ea] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-83fb4b7c-2904-4552-9e58-75117a25a2ea", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:43:02.757 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:43:02.757 4269 ERROR nova.compute.manager >2018-06-28 10:43:02.758 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:43:02.758 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0228009223938 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:43:02.758 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:43:02.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:43:02.766 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2ce6428f-af9a-45d7-a00a-87a5d0cf7b35] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-2ce6428f-af9a-45d7-a00a-87a5d0cf7b35", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:43:02.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:43:02.766 4269 ERROR nova.compute.manager >2018-06-28 10:43:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:54.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:54.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:43:54.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:43:54.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:43:55.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:55.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:43:57.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:57.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:43:59.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:00.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:04.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:04.923 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:44:04.923 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 
- - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:44:04.924 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000713109970093 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:44:04.924 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:44:04.924 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:44:04.934 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-17ac612a-32f7-4930-8ed6-0c0c079bafc8] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-17ac612a-32f7-4930-8ed6-0c0c079bafc8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:44:04.934 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:44:04.935 4269 ERROR nova.compute.manager >2018-06-28 10:44:04.935 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:44:04.935 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123720169067 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:44:04.936 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:44:04.936 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:44:04.944 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-dc16d638-9d19-4420-a540-cb3e263468f3] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-dc16d638-9d19-4420-a540-cb3e263468f3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:44:04.945 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:44:04.945 4269 ERROR nova.compute.manager >2018-06-28 10:44:04.946 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:44:04.946 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0228359699249 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:44:04.946 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:44:04.946 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:44:04.954 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d51d1872-370f-4cfd-aee8-e35890b4314c] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d51d1872-370f-4cfd-aee8-e35890b4314c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:44:04.954 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:44:04.954 4269 ERROR nova.compute.manager >2018-06-28 10:44:15.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:42.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:42.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 10:44:42.666 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 10:44:55.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:55.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:55.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:44:55.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:44:55.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache 
update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:44:56.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:56.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:44:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:58.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:44:59.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:45:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:45:01.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:45:05.640 4269 DEBUG oslo_service.periodic_task 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:45:05.732 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:45:05.732 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:45:05.732 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000594139099121 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:45:05.732 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:45:05.733 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:45:05.927 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d252f4f5-0197-4577-b5e2-d300db2cc5f8] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. 
Got 500: {"errors": [{"status": 500, "request_id": "req-d252f4f5-0197-4577-b5e2-d300db2cc5f8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:45:05.927 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.195s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:45:05.928 4269 ERROR nova.compute.manager
2018-06-28 10:45:05.928 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:45:05.929 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.197075128555 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:45:05.929 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:45:05.929 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:45:06.125 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-7f7bd760-226b-4ac4-aa4f-dfa558528184] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-7f7bd760-226b-4ac4-aa4f-dfa558528184", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:45:06.125 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.196s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:45:06.126 4269 ERROR nova.compute.manager
2018-06-28 10:45:06.126 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:45:06.126 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.394958019257 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:45:06.127 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:45:06.127 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:45:06.136 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-0ab91763-7050-4639-84f1-8c673db77d54] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-0ab91763-7050-4639-84f1-8c673db77d54", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:45:06.137 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:45:06.137 4269 ERROR nova.compute.manager
2018-06-28 10:45:09.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:45:09.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
2018-06-28 10:45:18.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:45:55.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:45:57.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:45:57.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:45:57.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:45:57.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:45:57.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:45:57.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:45:57.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:45:58.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:01.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:03.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:07.926 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:46:07.926 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:46:07.926 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00074291229248 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:46:07.927 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:46:07.927 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:46:07.936 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-3596fdd4-6ab5-4e02-b66b-97a008aa1f9c] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-3596fdd4-6ab5-4e02-b66b-97a008aa1f9c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:46:07.937 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:46:07.937 4269 ERROR nova.compute.manager
2018-06-28 10:46:07.938 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:46:07.938 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.012069940567 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:46:07.938 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:46:07.938 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:46:07.946 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-0952f1c0-0c45-4675-8b8e-51fc80419d6e] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-0952f1c0-0c45-4675-8b8e-51fc80419d6e", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:46:07.947 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
2018-06-28 10:46:07.947 4269 ERROR nova.compute.manager
2018-06-28 10:46:07.948 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:46:07.948 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0220968723297 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:46:07.948 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:46:07.948 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:46:07.955 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c42355d3-3143-438a-be70-f23193277d53] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-c42355d3-3143-438a-be70-f23193277d53", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:46:07.956 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     self._update(context, cn)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     return f(self, *a, **k)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
2018-06-28 10:46:07.956 4269 ERROR nova.compute.manager
2018-06-28 10:46:19.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:57.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:57.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:57.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-06-28 10:46:57.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-06-28 10:46:57.669 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-06-28 10:46:58.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:59.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:46:59.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-06-28 10:47:00.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:47:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:47:01.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:47:04.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:47:08.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-06-28 10:47:08.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
2018-06-28 10:47:08.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-06-28 10:47:08.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00068187713623 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
2018-06-28 10:47:08.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-06-28 10:47:08.739 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-06-28 10:47:08.748 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-24c161bc-2e4a-4939-948d-910a4e57d85c] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-24c161bc-2e4a-4939-948d-910a4e57d85c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
2018-06-28 10:47:08.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager Traceback (most recent call last):
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager
self._init_compute_node(context, resources) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:47:08.749 4269 ERROR nova.compute.manager >2018-06-28 10:47:08.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:47:08.750 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0121188163757 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:47:08.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:47:08.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:47:08.759 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-67320ec1-2b21-4f84-ae5f-a1a463296f37] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-67320ec1-2b21-4f84-ae5f-a1a463296f37", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:47:08.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:47:08.759 4269 ERROR nova.compute.manager >2018-06-28 10:47:08.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:47:08.760 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0227568149567 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:47:08.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:47:08.761 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:47:08.769 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-f3e57b9e-c9a7-4e1c-936a-d2dd1af3e7b2] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-f3e57b9e-c9a7-4e1c-936a-d2dd1af3e7b2", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:47:08.769 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:47:08.770 4269 ERROR nova.compute.manager >2018-06-28 10:47:58.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:47:59.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:47:59.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:47:59.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:47:59.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:47:59.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:48:00.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:48:00.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:48:01.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:48:03.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:48:03.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:48:04.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:48:09.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:48:09.738 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:48:09.738 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:48:09.739 4269 DEBUG 
nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000633955001831 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:48:09.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:48:09.739 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:48:09.749 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-457bedc7-9680-43db-938d-bd6265be4be8] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-457bedc7-9680-43db-938d-bd6265be4be8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:48:09.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:48:09.749 4269 ERROR nova.compute.manager >2018-06-28 10:48:09.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:48:09.750 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0120329856873 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:48:09.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:48:09.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:48:09.759 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-7b6a9ead-bea7-4b83-abdb-69e1644eb1c3] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-7b6a9ead-bea7-4b83-abdb-69e1644eb1c3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:48:09.759 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:48:09.759 4269 ERROR nova.compute.manager
>2018-06-28 10:48:09.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:48:09.760 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0220119953156 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:48:09.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:48:09.761 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:48:09.768 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-3d353217-67b8-43f8-81b2-13cbce402bc1] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-3d353217-67b8-43f8-81b2-13cbce402bc1", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:48:09.768 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:48:09.769 4269 ERROR nova.compute.manager
>2018-06-28 10:48:20.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:48:59.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:48:59.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:48:59.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:48:59.670 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:49:00.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:00.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:01.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:49:02.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:03.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:05.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:05.650 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:10.928 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:49:10.928 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:49:10.928 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000741958618164 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:49:10.929 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:49:10.929 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:49:10.938 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-4e5b0abd-d7e3-4570-b88a-912e38e57b33] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-4e5b0abd-d7e3-4570-b88a-912e38e57b33", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:49:10.939 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:49:10.939 4269 ERROR nova.compute.manager
>2018-06-28 10:49:10.940 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:49:10.940 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0122408866882 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:49:10.940 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:49:10.941 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:49:10.949 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-94741412-7c5c-4203-bf96-87531d60198c] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-94741412-7c5c-4203-bf96-87531d60198c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:49:10.949 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:49:10.949 4269 ERROR nova.compute.manager
>2018-06-28 10:49:10.950 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:49:10.950 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0224158763885 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:49:10.950 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:49:10.951 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:49:10.958 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c99a61e4-101a-4756-9412-7a7a656a5ce1] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-c99a61e4-101a-4756-9412-7a7a656a5ce1", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:49:10.958 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:49:10.958 4269 ERROR nova.compute.manager
>2018-06-28 10:49:43.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:49:43.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
>2018-06-28 10:49:43.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
>2018-06-28 10:50:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:01.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 10:50:01.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 10:50:01.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 10:50:02.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:03.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:03.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 10:50:04.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:04.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:05.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:06.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 10:50:10.741 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 10:50:10.742 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:50:10.742 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000721216201782 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:50:10.742 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:50:10.743 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:50:10.946 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-af1abe69-a864-4489-9a2c-c27cfd478c5f] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-af1abe69-a864-4489-9a2c-c27cfd478c5f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:50:10.947 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.204s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:50:10.947 4269 ERROR nova.compute.manager >2018-06-28 10:50:10.948 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:50:10.948 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.206676006317 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:50:10.948 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:50:10.949 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:50:11.152 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e5243c7a-1f58-42e2-a7e0-5f1abd108772] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-e5243c7a-1f58-42e2-a7e0-5f1abd108772", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:50:11.152 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.204s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:50:11.152 4269 ERROR nova.compute.manager >2018-06-28 10:50:11.153 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:50:11.153 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.412236213684 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:50:11.154 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:50:11.154 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:50:11.163 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d8db75e0-5d47-4162-bff8-cd6ec217673d] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-d8db75e0-5d47-4162-bff8-cd6ec217673d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:50:11.163 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:50:11.164 4269 ERROR nova.compute.manager >2018-06-28 10:50:11.164 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:50:11.165 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 10:50:19.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:50:19.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:50:25.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:01.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:01.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:01.657 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:51:01.657 4269 DEBUG 
nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:51:01.672 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:51:03.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:03.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:51:04.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:05.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:06.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:07.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:51:12.730 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:51:12.731 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:51:12.731 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000687122344971 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:51:12.731 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:51:12.732 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: 
waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:51:12.741 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e5dd64b5-adc3-4198-9b40-0e19dafdf740] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-e5dd64b5-adc3-4198-9b40-0e19dafdf740", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:51:12.741 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:51:12.741 4269 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:51:12.741 4269 ERROR nova.compute.manager >2018-06-28 10:51:12.742 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:51:12.742 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0116930007935 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:51:12.742 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:51:12.743 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:51:12.751 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-1d7b9413-3599-4d06-9de8-09dedceea536] Failed to retrieve 
resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-1d7b9413-3599-4d06-9de8-09dedceea536", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:51:12.751 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in 
_update_available_resource >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) 
>2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:51:12.751 4269 ERROR nova.compute.manager >2018-06-28 10:51:12.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:51:12.752 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0216941833496 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:51:12.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:51:12.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:51:12.760 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d465a8bc-99a0-4d66-89b9-7d09a04a9984] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-d465a8bc-99a0-4d66-89b9-7d09a04a9984", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:51:12.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:51:12.760 4269 ERROR nova.compute.manager >2018-06-28 10:52:03.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:03.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:03.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:52:03.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:03.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:52:03.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:52:03.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:52:04.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:06.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:07.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:08.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:52:12.732 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:52:12.732 4269 DEBUG nova.compute.resource_tracker 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:52:12.732 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000686883926392 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:52:12.733 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:52:12.733 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:52:12.743 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c0b7a159-6b93-4729-9987-868430716e54] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-c0b7a159-6b93-4729-9987-868430716e54", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:52:12.743 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:52:12.743 4269 ERROR nova.compute.manager >2018-06-28 10:52:12.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:52:12.744 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.012403011322 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:52:12.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:52:12.745 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:52:12.753 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-9318f356-1370-4a72-810e-73c8ba4de714] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-9318f356-1370-4a72-810e-73c8ba4de714", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:52:12.753 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:52:12.753 4269 ERROR nova.compute.manager >2018-06-28 10:52:12.754 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:52:12.754 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0225629806519 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:52:12.754 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:52:12.755 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:52:12.762 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-824a6be7-b3cf-4c96-8f68-97e06967854b] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-824a6be7-b3cf-4c96-8f68-97e06967854b", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:52:12.763 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:52:12.763 4269 ERROR nova.compute.manager >2018-06-28 10:52:30.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:04.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:04.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:53:04.656 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:04.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:53:04.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:53:04.669 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:53:05.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:06.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:07.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:09.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:09.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:53:12.917 4269 DEBUG nova.virt.ironic.driver 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:53:12.917 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:53:12.917 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00062894821167 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:53:12.918 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:53:12.918 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:53:12.928 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-26f6f3ba-da4e-437f-abc6-07ffcbaa53c9] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-26f6f3ba-da4e-437f-abc6-07ffcbaa53c9", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:53:12.928 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:53:12.928 4269 ERROR nova.compute.manager >2018-06-28 10:53:12.929 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:53:12.929 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.012405872345 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:53:12.930 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:53:12.930 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:53:12.938 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c24672f8-8237-440b-be33-3f06fbbd61a6] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-c24672f8-8237-440b-be33-3f06fbbd61a6", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:53:12.938 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:53:12.938 4269 ERROR nova.compute.manager >2018-06-28 10:53:12.939 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:53:12.939 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0222609043121 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:53:12.939 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:53:12.940 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:53:12.947 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-254ae0d8-5762-4622-9caa-e9ea5224e5e4] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-254ae0d8-5762-4622-9caa-e9ea5224e5e4", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:53:12.947 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:53:12.947 4269 ERROR nova.compute.manager >2018-06-28 10:54:05.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:05.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:54:05.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:54:05.652 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:54:06.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:06.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:06.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:54:06.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:09.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:09.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:09.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:12.925 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:54:12.925 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 
- - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:54:12.926 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000734090805054 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:54:12.926 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:54:12.926 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:54:12.936 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-34a30d7b-bae2-4295-b944-c5daa26a7303] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-34a30d7b-bae2-4295-b944-c5daa26a7303", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:54:12.936 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:54:12.936 4269 ERROR nova.compute.manager >2018-06-28 10:54:12.937 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:54:12.937 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0119950771332 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:54:12.937 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:54:12.938 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:54:12.946 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-6e76b277-5bef-4fd3-a963-3e05809ec6c6] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-6e76b277-5bef-4fd3-a963-3e05809ec6c6", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:54:12.946 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:54:12.946 4269 ERROR nova.compute.manager >2018-06-28 10:54:12.947 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:54:12.947 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0222249031067 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:54:12.948 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:54:12.948 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:54:12.956 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-78a2335f-8bad-4f15-824d-768c9cbbeefd] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-78a2335f-8bad-4f15-824d-768c9cbbeefd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:54:12.956 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:54:12.956 4269 ERROR nova.compute.manager >2018-06-28 10:54:31.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:52.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:54:52.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862 >2018-06-28 10:54:52.668 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871 >2018-06-28 10:55:06.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:06.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:07.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:55:07.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:07.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:07.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:55:07.642 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:55:07.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:55:09.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:10.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:11.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:11.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905 >2018-06-28 10:55:12.653 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:55:12.743 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:55:12.743 4269 DEBUG nova.compute.resource_tracker 
[req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:55:12.744 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000586032867432 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:55:12.744 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:55:12.744 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:55:12.944 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-f09c3ca9-1dae-452d-99b1-10124d2502fd] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-f09c3ca9-1dae-452d-99b1-10124d2502fd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:55:12.944 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.200s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:55:12.944 4269 ERROR nova.compute.manager >2018-06-28 10:55:12.945 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:55:12.945 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.202460050583 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:55:12.946 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:55:12.946 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:55:13.136 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-622f6257-616c-4c2d-a148-7813e0b2d8fc] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-622f6257-616c-4c2d-a148-7813e0b2d8fc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:55:13.136 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.190s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:55:13.137 4269 ERROR nova.compute.manager >2018-06-28 10:55:13.137 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:55:13.137 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.394304990768 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:55:13.138 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:55:13.138 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:55:13.146 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-459a42c5-a6d3-4ebc-823f-786c1162f9a0] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-459a42c5-a6d3-4ebc-823f-786c1162f9a0", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:55:13.146 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:55:13.146 4269 ERROR nova.compute.manager >2018-06-28 10:55:33.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:06.649 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:07.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:07.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:08.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:08.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:56:08.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:08.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:56:08.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:56:08.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:56:10.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:11.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:56:12.734 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:56:12.735 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:56:12.735 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000653982162476 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:56:12.735 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:56:12.736 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:56:12.745 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-72013a08-6e81-4c49-9b9c-037a72c112da] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. 
Got 500: {"errors": [{"status": 500, "request_id": "req-72013a08-6e81-4c49-9b9c-037a72c112da", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 10:56:12.746 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:56:12.746 4269 ERROR nova.compute.manager >2018-06-28 10:56:12.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:56:12.747 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0128359794617 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:56:12.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:56:12.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:56:12.756 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-183dc410-59a9-42b9-9513-54bf51b41bf3] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-183dc410-59a9-42b9-9513-54bf51b41bf3", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:56:12.756 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:56:12.757 4269 ERROR nova.compute.manager >2018-06-28 10:56:12.757 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:56:12.757 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0232241153717 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:56:12.758 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:56:12.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:56:12.765 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-8f143c10-adf0-49bc-8197-b7adafc06f36] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-8f143c10-adf0-49bc-8197-b7adafc06f36", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:56:12.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:56:12.766 4269 ERROR nova.compute.manager >2018-06-28 10:56:36.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:07.660 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:08.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:09.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:10.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:10.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:57:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:10.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:57:10.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:57:10.659 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:57:11.660 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:57:14.735 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available 
node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:57:14.736 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:57:14.736 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00069785118103 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:57:14.736 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:57:14.737 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:57:14.746 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-2dc18931-b10b-4922-a56d-027b564a140c] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-2dc18931-b10b-4922-a56d-027b564a140c", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:57:14.746 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:57:14.746 4269 ERROR nova.compute.manager >2018-06-28 10:57:14.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:57:14.747 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0119178295135 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:57:14.747 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:57:14.748 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:57:14.756 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-16671415-1453-4441-bbe4-2f60182a5136] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-16671415-1453-4441-bbe4-2f60182a5136", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:57:14.756 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:57:14.757 4269 ERROR nova.compute.manager >2018-06-28 10:57:14.757 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:57:14.757 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0221829414368 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:57:14.758 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:57:14.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:57:14.765 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-68b4b604-f085-4445-98ba-aa3e1d9db948] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-68b4b604-f085-4445-98ba-aa3e1d9db948", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:57:14.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:57:14.766 4269 ERROR nova.compute.manager >2018-06-28 10:58:07.766 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:10.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:10.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:58:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:10.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:10.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:58:10.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:58:10.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 
- - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:58:11.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:13.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:15.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:58:15.739 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:58:15.739 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:58:15.740 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000693082809448 _node_from_cache 
/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:58:15.740 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:58:15.740 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:58:15.750 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-d1d280c3-0e29-45c5-85ff-ac0502599667] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-d1d280c3-0e29-45c5-85ff-ac0502599667", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:58:15.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:58:15.751 4269 ERROR nova.compute.manager >2018-06-28 10:58:15.751 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:58:15.751 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123710632324 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:58:15.752 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:58:15.752 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:58:15.760 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e215d0cf-de05-4877-ac99-636504af450f] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-e215d0cf-de05-4877-ac99-636504af450f", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:58:15.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:58:15.761 4269 ERROR nova.compute.manager >2018-06-28 10:58:15.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:58:15.761 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0223610401154 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:58:15.762 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:58:15.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:58:15.769 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-896d6445-e28a-45d5-bc7e-e65b6bfd8813] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-896d6445-e28a-45d5-bc7e-e65b6bfd8813", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:58:15.770 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 10:58:15.770 4269 ERROR nova.compute.manager >2018-06-28 10:58:36.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:08.659 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:10.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:10.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 10:59:11.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:11.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:12.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:12.640 4269 DEBUG 
nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 10:59:12.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 10:59:12.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 10:59:13.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:14.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:17.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 10:59:17.928 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 10:59:17.929 4269 DEBUG 
nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:59:17.929 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.00065803527832 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:59:17.929 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:59:17.930 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:59:17.939 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-5a22cd88-9fed-42c7-83ee-ea77519db455] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-5a22cd88-9fed-42c7-83ee-ea77519db455", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:59:17.939 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 10:59:17.939 4269 ERROR nova.compute.manager >2018-06-28 10:59:17.940 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available 
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 10:59:17.940 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0116889476776 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 10:59:17.940 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 10:59:17.941 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 10:59:17.949 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-a58f0a8b-2cd1-4a0d-82c3-a979aba00240] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-a58f0a8b-2cd1-4a0d-82c3-a979aba00240", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 10:59:17.949 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 10:59:17.949 4269 ERROR nova.compute.manager
>2018-06-28 10:59:17.950 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 10:59:17.950 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0217599868774 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 10:59:17.950 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 10:59:17.951 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 10:59:17.958 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-a2e0178c-6667-4b50-83e3-b011e26da746] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-a2e0178c-6667-4b50-83e3-b011e26da746", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 10:59:17.958 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 10:59:17.958 4269 ERROR nova.compute.manager
>2018-06-28 11:00:01.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task
ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:01.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7862
>2018-06-28 11:00:01.656 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7871
>2018-06-28 11:00:08.657 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:11.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:11.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 11:00:11.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:12.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:12.681 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:13.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:13.640 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 11:00:13.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-06-28 11:00:13.653 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -]
Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-06-28 11:00:14.654 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:14.684 4269 INFO nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running instance usage audit for host undercloud-0.redhat.local from 2018-06-28 14:00:00 to 2018-06-28 15:00:00. 0 instances.
>2018-06-28 11:00:15.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:19.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:19.914 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733
>2018-06-28 11:00:19.914 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 11:00:19.915 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000664234161377 _node_from_cache
/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 11:00:19.915 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 11:00:19.915 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 11:00:20.110 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-542d35fc-5e94-4fac-a259-75bc1a4a5aa6] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-542d35fc-5e94-4fac-a259-75bc1a4a5aa6", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 11:00:20.110 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.195s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1
>2018-06-28 11:00:20.110 4269 ERROR nova.compute.manager
>2018-06-28 11:00:20.111 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available
compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 11:00:20.112 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.197592020035 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 11:00:20.112 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 11:00:20.112 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 11:00:20.308 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-b3ede78c-8742-416a-bd46-360b2253dd16] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-b3ede78c-8742-416a-bd46-360b2253dd16", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 11:00:20.308 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.196s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8
>2018-06-28 11:00:20.309 4269 ERROR nova.compute.manager
>2018-06-28 11:00:20.309 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available
compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-06-28 11:00:20.309 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.395396232605 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837
>2018-06-28 11:00:20.310 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-06-28 11:00:20.310 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-06-28 11:00:20.319 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-24a6e2c9-2601-4429-a3b6-ab02ec5ca4dd] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-24a6e2c9-2601-4429-a3b6-ab02ec5ca4dd", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}.
>2018-06-28 11:00:20.319 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager Traceback (most recent call last):
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager self._update_available_resource(context, resources)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager return f(*args, **kwargs)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager self._init_compute_node(context, resources)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager self._update(context, cn)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager return f(self, *a, **k)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4
>2018-06-28 11:00:20.319 4269 ERROR nova.compute.manager
>2018-06-28 11:00:25.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task
ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:25.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7905
>2018-06-28 11:00:36.648 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:41.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:00:48.649 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:10.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:12.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:13.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:13.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:13.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-06-28 11:01:13.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:15.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:15.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:15.651 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-06-28 11:01:15.652 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-06-28 11:01:15.652 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache
/usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 11:01:15.663 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 11:01:19.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:01:19.749 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 11:01:19.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:01:19.749 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000736951828003 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:01:19.750 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:01:19.750 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:01:19.759 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-c7b8b737-c119-44f4-87c9-b9d274d43657] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-c7b8b737-c119-44f4-87c9-b9d274d43657", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:01:19.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:01:19.760 4269 ERROR 
nova.compute.manager return f(self, *a, **k) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 11:01:19.760 4269 ERROR nova.compute.manager >2018-06-28 11:01:19.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:01:19.761 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0123250484467 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:01:19.761 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:01:19.762 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:01:19.770 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-148ca748-e573-4890-a1c5-1b022d9a65a8] Failed to retrieve 
resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. Got 500: {"errors": [{"status": 500, "request_id": "req-148ca748-e573-4890-a1c5-1b022d9a65a8", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:01:19.770 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in 
_update_available_resource >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) 
>2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 11:01:19.770 4269 ERROR nova.compute.manager >2018-06-28 11:01:19.771 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:01:19.771 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0224659442902 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:01:19.771 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:01:19.772 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:01:19.779 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-453e2fa1-5b94-47dc-910c-880ae10b07cc] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. 
Got 500: {"errors": [{"status": 500, "request_id": "req-453e2fa1-5b94-47dc-910c-880ae10b07cc", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:01:19.779 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 11:01:19.779 4269 ERROR nova.compute.manager >2018-06-28 11:02:11.767 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:13.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:14.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:14.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:15.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:15.650 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:15.651 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 11:02:16.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:16.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:16.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 11:02:16.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 11:02:16.654 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 11:02:19.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:02:19.752 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 11:02:19.753 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:02:19.753 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000592947006226 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:02:19.753 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:02:19.754 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:02:19.763 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
[req-df57e195-2608-4264-ab16-5d7275a8a24d] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-df57e195-2608-4264-ab16-5d7275a8a24d", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:02:19.764 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.010s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:02:19.764 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 11:02:19.764 4269 ERROR nova.compute.manager >2018-06-28 11:02:19.765 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:02:19.765 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0125241279602 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:02:19.765 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:02:19.766 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:02:19.774 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-6ed5d5de-e7af-42b8-a000-00f1701622ce] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-6ed5d5de-e7af-42b8-a000-00f1701622ce", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:02:19.774 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 11:02:19.774 4269 ERROR nova.compute.manager >2018-06-28 11:02:19.775 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:02:19.775 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.02281498909 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:02:19.776 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:02:19.776 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:02:19.783 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-e3aa947b-ad23-44a6-b426-416d6297b379] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-e3aa947b-ad23-44a6-b426-416d6297b379", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 11:02:19.783 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 11:02:19.784 4269 ERROR nova.compute.manager >2018-06-28 11:02:39.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task 
ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:12.660 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:13.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:15.638 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:15.641 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:15.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-06-28 11:03:16.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:16.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:16.652 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:17.640 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:17.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-06-28 11:03:17.641 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-06-28 11:03:17.655 4269 DEBUG nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-06-28 11:03:19.655 4269 DEBUG oslo_service.periodic_task [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-06-28 11:03:19.748 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Returning 3 available node(s) get_available_nodes /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:733 >2018-06-28 11:03:19.748 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: 55611eb8-c4fa-4576-ae28-d2017563fdd0) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:03:19.748 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node 55611eb8-c4fa-4576-ae28-d2017563fdd0, age: 0.000566005706787 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:03:19.749 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=55611eb8-c4fa-4576-ae28-d2017563fdd0 free_ram=6144MB free_disk=49GB free_vcpus=4 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:03:19.749 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:03:19.758 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] 
[req-75689be1-d5b4-44b2-a9b2-07192f7a5cf1] Failed to retrieve resource provider tree from placement API for UUID e05f557d-da64-4714-9ce3-f7df5921f5e1. Got 500: {"errors": [{"status": 500, "request_id": "req-75689be1-d5b4-44b2-a9b2-07192f7a5cf1", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:03:19.758 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.009s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node 55611eb8-c4fa-4576-ae28-d2017563fdd0.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:03:19.759 4269 ERROR 
nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID e05f557d-da64-4714-9ce3-f7df5921f5e1 >2018-06-28 11:03:19.759 4269 ERROR nova.compute.manager >2018-06-28 11:03:19.759 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: d56ae6cc-b350-42fd-b0ba-40b6bfa6af02) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:03:19.760 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02, age: 0.0117950439453 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:03:19.760 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=d56ae6cc-b350-42fd-b0ba-40b6bfa6af02 free_ram=4096MB free_disk=19GB free_vcpus=2 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:03:19.760 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:03:19.768 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-5fbb79e0-3605-4239-b5fa-5fa158925717] Failed to retrieve resource provider tree from placement API for UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8. 
Got 500: {"errors": [{"status": 500, "request_id": "req-5fbb79e0-3605-4239-b5fa-5fa158925717", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. >2018-06-28 11:03:19.769 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.008s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node d56ae6cc-b350-42fd-b0ba-40b6bfa6af02.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager 
self._init_compute_node(context, resources) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager self._update(context, cn) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager 
ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 28ad31d1-4317-4c61-83db-cbe5683e12c8 >2018-06-28 11:03:19.769 4269 ERROR nova.compute.manager >2018-06-28 11:03:19.769 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Auditing locally available compute resources for undercloud-0.redhat.local (node: c40592fd-6b81-4279-8496-8a3c5da28f52) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-06-28 11:03:19.770 4269 DEBUG nova.virt.ironic.driver [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Using cache for node c40592fd-6b81-4279-8496-8a3c5da28f52, age: 0.0216429233551 _node_from_cache /usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py:837 >2018-06-28 11:03:19.770 4269 DEBUG nova.compute.resource_tracker [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Hypervisor/Node resource view: name=c40592fd-6b81-4279-8496-8a3c5da28f52 free_ram=32768MB free_disk=39GB free_vcpus=8 pci_devices=None _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-06-28 11:03:19.770 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-06-28 11:03:19.777 4269 ERROR nova.scheduler.client.report [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] [req-757e8626-0dd4-49ab-b6f9-ceb49a61af75] Failed to retrieve resource provider tree from placement API for UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4. Got 500: {"errors": [{"status": 500, "request_id": "req-757e8626-0dd4-49ab-b6f9-ceb49a61af75", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n 'MIMEAccept' object has no attribute 'acceptable_offers' ", "title": "Internal Server Error"}]}. 
>2018-06-28 11:03:19.777 4269 DEBUG oslo_concurrency.lockutils [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.007s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager [req-823b8c68-bab7-459d-8ad8-88900060e8a6 - - - - -] Error updating resources for node c40592fd-6b81-4279-8496-8a3c5da28f52.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager Traceback (most recent call last): >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7457, in _update_available_resource_for_node >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager rt.update_available_resource(context, nodename) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 686, in update_available_resource >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager self._update_available_resource(context, resources) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager return f(*args, **kwargs) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 710, in _update_available_resource >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager self._init_compute_node(context, resources) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager self._update(context, cn) 
>2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 883, in _update >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager context, compute_node.uuid, name=compute_node.hypervisor_hostname) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 990, in get_provider_tree_and_ensure_root >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager parent_provider_uuid=parent_provider_uuid) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 653, in _ensure_resource_provider >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager rps_to_refresh = self._get_providers_in_tree(context, uuid) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 66, in wrapper >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager return f(self, *a, **k) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 520, in _get_providers_in_tree >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager raise exception.ResourceProviderRetrievalFailed(uuid=uuid) >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID f7ef599c-ecfe-4fc0-aee8-87e6126257a4 >2018-06-28 11:03:19.778 4269 ERROR nova.compute.manager