Red Hat Bugzilla – Attachment 1457521 Details for Bug 1599372 – Detached volumes are stuck in "detaching" state
Description: compute:/var/log/containers/nova/nova-compute.log
Filename: nova-compute.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-07-09 16:00:36 UTC
Size: 717.91 KB
2018-07-09 14:17:45.291 1 DEBUG oslo_service.periodic_task [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:17:45.292 1 DEBUG nova.compute.manager [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-07-09 14:17:45.292 1 DEBUG nova.compute.manager [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-07-09 14:17:45.309 1 DEBUG nova.compute.manager [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
2018-07-09 14:17:46.310 1 DEBUG oslo_service.periodic_task [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:17:46.328 1 DEBUG oslo_service.periodic_task [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:17:46.328 1 DEBUG nova.compute.manager [req-e861832a-33c0-48ff-bf9e-96ebc2157cd3 - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-07-09 14:18:04.764 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:04.788 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Starting instance... _do_build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:1792
2018-07-09 14:18:04.938 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.instance_claim" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:04.940 1 DEBUG nova.compute.resource_tracker [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Memory overhead for 64 MB instance; 0 MB instance_claim /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:202
2018-07-09 14:18:04.940 1 DEBUG nova.compute.resource_tracker [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Disk overhead for 0 GB instance; 0 GB instance_claim /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:205
2018-07-09 14:18:04.941 1 DEBUG nova.compute.resource_tracker [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CPU overhead for 1 vCPUs instance; 0 vCPU(s) instance_claim /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:208
2018-07-09 14:18:04.957 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Attempting claim on node compute-0.localdomain: memory 64 MB, disk 0 GB, vcpus 1 CPU
2018-07-09 14:18:04.958 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Total memory: 6143 MB, used: 4096.00 MB
2018-07-09 14:18:04.958 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] memory limit not specified, defaulting to unlimited
2018-07-09 14:18:04.958 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Total disk: 39 GB, used: 0.00 GB
2018-07-09 14:18:04.959 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] disk limit not specified, defaulting to unlimited
2018-07-09 14:18:04.959 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Total vcpu: 4 VCPU, used: 0.00 VCPU
2018-07-09 14:18:04.959 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] vcpu limit not specified, defaulting to unlimited
2018-07-09 14:18:04.960 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python2.7/site-packages/nova/virt/hardware.py:1561
2018-07-09 14:18:04.961 1 INFO nova.compute.claims [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Claim successful on node compute-0.localdomain
2018-07-09 14:18:05.181 1 DEBUG nova.scheduler.client.report [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:18:05.201 1 DEBUG nova.scheduler.client.report [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:18:05.289 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker.instance_claim" :: held 0.351s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:05.290 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Start building networks asynchronously for instance. _build_resources /usr/lib/python2.7/site-packages/nova/compute/manager.py:2159
2018-07-09 14:18:05.376 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Allocating IP information in the background. _allocate_network_async /usr/lib/python2.7/site-packages/nova/compute/manager.py:1372
2018-07-09 14:18:05.376 1 DEBUG nova.network.neutronv2.api [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] allocate_for_instance() allocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:913
2018-07-09 14:18:05.398 1 DEBUG nova.block_device [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:575
2018-07-09 14:18:05.399 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Start building block device mappings for instance. _build_resources /usr/lib/python2.7/site-packages/nova/compute/manager.py:2190
2018-07-09 14:18:05.466 1 DEBUG nova.network.neutronv2.api [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] No network configured allocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:936
2018-07-09 14:18:05.467 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance network_info: |[]| _allocate_network_async /usr/lib/python2.7/site-packages/nova/compute/manager.py:1386
2018-07-09 14:18:05.487 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "placement_client" acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:05.491 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "placement_client" released by "nova.scheduler.client.report._create_client" :: held 0.003s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:06.493 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:2006
2018-07-09 14:18:06.493 1 DEBUG nova.block_device [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:575
2018-07-09 14:18:06.495 1 INFO nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Creating image
2018-07-09 14:18:06.532 1 DEBUG nova.virt.libvirt.storage.rbd_utils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] rbd image 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_disk does not exist __init__ /usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:74
2018-07-09 14:18:06.569 1 DEBUG nova.virt.libvirt.storage.rbd_utils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] rbd image 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_disk does not exist __init__ /usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:74
2018-07-09 14:18:06.573 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "5b0904df4f49e357f8520a218e9db1e083b07d07" acquired by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:06.573 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "5b0904df4f49e357f8520a218e9db1e083b07d07" released by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:06.605 1 DEBUG nova.virt.libvirt.storage.rbd_utils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] rbd image 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_disk does not exist __init__ /usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:74
2018-07-09 14:18:06.608 1 DEBUG oslo_concurrency.processutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2018-07-09 14:18:06.787 1 DEBUG oslo_concurrency.processutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2018-07-09 14:18:06.788 1 DEBUG nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Ensure instance console log exists: /var/lib/nova/instances/6b44d4d1-7d58-4b81-b6ba-7e773bffa94e/console.log _ensure_console_log_for_instance /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3274
2018-07-09 14:18:06.789 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:06.789 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "vgpu_resources" released by "nova.virt.libvirt.driver._allocate_mdevs" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:06.790 1 DEBUG nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'ide', 'mapping': {'disk': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}, 'root': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}}} image_meta=ImageMeta(checksum='7ef58c0f9aa6136021cb61a5d4f275e5',container_format='bare',created_at=2018-07-05T10:18:49Z,direct_url=<?>,disk_format='qcow2',id=52b458a6-e709-434c-8918-5183fe30e9d4,min_disk=0,min_ram=0,name='cirros-0.3.5-x86_64-uec.tar.gz',owner='ca310de510714829bfdfa7879154dc08',properties=ImageMetaProps,protected=<?>,size=8683335,status='active',tags=<?>,updated_at=2018-07-05T10:18:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'swap': None, 'root_device_name': u'/dev/vda', 'ephemerals': [], 'block_device_mapping': []} _get_guest_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5426
2018-07-09 14:18:06.795 1 DEBUG nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CPU mode 'host-model' model '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3840
2018-07-09 14:18:06.795 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Getting desirable topologies for flavor Flavor(created_at=2018-07-05T10:18:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='4fc65822-b6a3-4609-a0ff-abd67de0b86d',id=1,is_public=True,memory_mb=64,name='m1.nano',projects=<?>,root_gb=0,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='7ef58c0f9aa6136021cb61a5d4f275e5',container_format='bare',created_at=2018-07-05T10:18:49Z,direct_url=<?>,disk_format='qcow2',id=52b458a6-e709-434c-8918-5183fe30e9d4,min_disk=0,min_ram=0,name='cirros-0.3.5-x86_64-uec.tar.gz',owner='ca310de510714829bfdfa7879154dc08',properties=ImageMetaProps,protected=<?>,size=8683335,status='active',tags=<?>,updated_at=2018-07-05T10:18:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:562
2018-07-09 14:18:06.796 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Flavor limits 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:310
2018-07-09 14:18:06.797 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Image limits 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:314
2018-07-09 14:18:06.797 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Flavor pref 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:346
2018-07-09 14:18:06.797 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Image pref 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:350
2018-07-09 14:18:06.798 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:390
2018-07-09 14:18:06.798 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:566
2018-07-09 14:18:06.798 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:429
2018-07-09 14:18:06.799 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:456
2018-07-09 14:18:06.799 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:571
2018-07-09 14:18:06.799 1 DEBUG nova.virt.hardware [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:597
2018-07-09 14:18:06.802 1 DEBUG nova.privsep.utils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python2.7/site-packages/nova/privsep/utils.py:52
2018-07-09 14:18:06.806 1 DEBUG oslo_concurrency.processutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2018-07-09 14:18:07.137 1 DEBUG oslo_concurrency.processutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2018-07-09 14:18:07.147 1 DEBUG nova.objects.instance [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'pci_devices' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
2018-07-09 14:18:07.165 1 DEBUG nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] End _get_guest_xml xml=<domain type="kvm">
<uuid>6b44d4d1-7d58-4b81-b6ba-7e773bffa94e</uuid>
<name>instance-0000006a</name>
<memory>65536</memory>
<vcpu>1</vcpu>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="18.0.0-0.20180629234416.54343e9.el7ost"/>
<nova:name>tempest-VolumesAdminNegativeTest-server-1975929041</nova:name>
<nova:creationTime>2018-07-09 14:18:06</nova:creationTime>
<nova:flavor name="m1.nano">
<nova:memory>64</nova:memory>
<nova:disk>0</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>1</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="c63d372255b547688c3f4ebac60ee839">tempest-VolumesAdminNegativeTest-1401844831</nova:user>
<nova:project uuid="c00c33e0c4644d3694a3a96dc897e47a">tempest-VolumesAdminNegativeTest-1401844831</nova:project>
</nova:owner>
<nova:root type="image" uuid="52b458a6-e709-434c-8918-5183fe30e9d4"/>
</nova:instance>
</metadata>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Red Hat</entry>
<entry name="product">OpenStack Compute</entry>
<entry name="version">18.0.0-0.20180629234416.54343e9.el7ost</entry>
<entry name="serial">9acb7829-e53f-45ac-bc93-2c97c861fae8</entry>
<entry name="uuid">6b44d4d1-7d58-4b81-b6ba-7e773bffa94e</entry>
<entry name="family">Virtual Machine</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cputune>
<shares>1024</shares>
</cputune>
<clock offset="utc">
<timer name="pit" tickpolicy="delay"/>
<timer name="rtc" tickpolicy="catchup"/>
<timer name="hpet" present="no"/>
</clock>
<cpu mode="host-model" match="exact">
<topology sockets="1" cores="1" threads="1"/>
</cpu>
<devices>
<disk type="network" device="disk">
<driver type="raw" cache="writeback" discard="unmap"/>
<source protocol="rbd" name="vms/6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_disk">
<host name="172.17.3.22" port="6789"/>
</source>
<auth username="openstack">
<secret type="ceph" uuid="a7f60d80-8032-11e8-8802-52540031427e"/>
</auth>
<target bus="virtio" dev="vda"/>
</disk>
<serial type="pty">
<log file="/var/lib/nova/instances/6b44d4d1-7d58-4b81-b6ba-7e773bffa94e/console.log" append="off"/>
</serial>
<input type="tablet" bus="usb"/>
<graphics type="vnc" autoport="yes" keymap="en-us" listen="172.17.1.13"/>
<video>
<model type="cirrus"/>
</video>
<memballoon model="virtio">
<stats period="10"/>
</memballoon>
</devices>
</domain>
 _get_guest_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5433
2018-07-09 14:18:07.261 1 DEBUG nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:8687
2018-07-09 14:18:07.891 1 DEBUG nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Guest created on hypervisor spawn /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3080
2018-07-09 14:18:07.892 1 DEBUG nova.virt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Emitting event <LifecycleEvent: 1531145887.89, 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e => Resumed> emit_event /usr/lib/python2.7/site-packages/nova/virt/driver.py:1521
2018-07-09 14:18:07.892 1 INFO nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] VM Resumed (Lifecycle Event)
2018-07-09 14:18:07.900 1 INFO nova.virt.libvirt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance spawned successfully.
2018-07-09 14:18:07.900 1 INFO nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Took 1.41 seconds to spawn the instance on the hypervisor.
2018-07-09 14:18:07.901 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166
2018-07-09 14:18:07.937 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166
2018-07-09 14:18:07.948 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python2.7/site-packages/nova/compute/manager.py:1076
2018-07-09 14:18:08.002 1 INFO nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] During sync_power_state the instance has a pending task (spawning). Skip.
2018-07-09 14:18:08.002 1 DEBUG nova.virt.driver [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Emitting event <LifecycleEvent: 1531145887.89, 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e => Started> emit_event /usr/lib/python2.7/site-packages/nova/virt/driver.py:1521
2018-07-09 14:18:08.003 1 INFO nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] VM Started (Lifecycle Event)
2018-07-09 14:18:08.018 1 INFO nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Took 3.10 seconds to build instance.
2018-07-09 14:18:08.049 1 DEBUG oslo_concurrency.lockutils [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" released by "nova.compute.manager._locked_do_build_and_run_instance" :: held 3.285s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:08.079 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166
2018-07-09 14:18:08.084 1 DEBUG nova.compute.manager [req-5c0b3e3a-fb3b-451c-ac06-5b75f61fe0fa c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python2.7/site-packages/nova/compute/manager.py:1076
2018-07-09 14:18:14.715 1 DEBUG oslo_concurrency.lockutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" acquired by "nova.compute.manager.do_reserve" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:14.735 1 DEBUG nova.block_device [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:575
2018-07-09 14:18:14.735 1 DEBUG nova.objects.instance [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'flavor' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
2018-07-09 14:18:14.798 1 DEBUG oslo_concurrency.lockutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" released by "nova.compute.manager.do_reserve" :: held 0.083s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:14.983 1 DEBUG oslo_concurrency.lockutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" acquired by "nova.compute.manager.do_attach_volume" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:14.983 1 INFO nova.compute.manager [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Attaching volume b3a1989b-b3c8-4570-a51d-574524a409da to /dev/vdb
2018-07-09 14:18:15.018 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] REQ: curl -g -i -X GET http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da -H "Accept: application/json" -H "OpenStack-API-Version: volume 3.48" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}a5d84b86c002fd3388fe236e5bf7b9f9d1ae62db" -H "X-OpenStack-Request-ID: req-b2e6d797-a712-45c6-b1c3-024b1ac3520b" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448
2018-07-09 14:18:15.119 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP: [200] Content-Encoding: gzip Content-Length: 505 Content-Type: application/json Date: Mon, 09 Jul 2018 14:18:15 GMT OpenStack-API-Version: volume 3.48 Server: Apache Vary: OpenStack-API-Version,Accept-Encoding x-compute-request-id: req-0e650731-f116-4574-af28-782a5c0ea69a x-openstack-request-id: req-0e650731-f116-4574-af28-782a5c0ea69a _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479
2018-07-09 14:18:15.120 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP BODY: {"volume": {"attachments": [], "links": [{"href": "http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da", "rel": "self"}, {"href": "http://172.17.1.10:8776/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da", "rel": "bookmark"}], "availability_zone": "nova", "encrypted": false, "updated_at": "2018-07-09T14:18:14.000000", "replication_status": null, "snapshot_id": null, "id": "b3a1989b-b3c8-4570-a51d-574524a409da", "size": 1, "user_id": "c63d372255b547688c3f4ebac60ee839", "os-vol-tenant-attr:tenant_id": "c00c33e0c4644d3694a3a96dc897e47a", "metadata": {}, "status": "reserved", "description": null, "multiattach": false, "service_uuid": "b9156d9d-46d2-408a-8602-b15a0593a288", "source_volid": null, "consistencygroup_id": null, "name": "tempest-VolumesAdminNegativeTest-volume-788599059", "bootable": "false", "shared_targets": true, "volume_type": null, "group_id": null, "created_at": "2018-07-09T14:18:11.000000"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511
2018-07-09 14:18:15.120 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] GET call to cinderv3 for http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da used request id req-0e650731-f116-4574-af28-782a5c0ea69a request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844
2018-07-09 14:18:15.121 1 DEBUG oslo_concurrency.lockutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" acquired by "nova.virt.block_device._do_locked_attach" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:15.122 1 DEBUG os_brick.utils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] ==> get_connector_properties: call u"{'execute': None, 'my_ip': '172.17.1.13', 'enforce_multipath': True, 'host': 'compute-0.localdomain', 'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'multipath': False}" trace_logging_wrapper /usr/lib/python2.7/site-packages/os_brick/utils.py:146
2018-07-09 14:18:15.123 1 INFO oslo.privsep.daemon
[req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/nova/nova-dist.conf', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp9RKWMf/privsep.sock'] >2018-07-09 14:18:15.794 1 INFO oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Spawned new privsep daemon via rootwrap >2018-07-09 14:18:15.795 1 DEBUG oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Accepted privsep connection to /tmp/tmp9RKWMf/privsep.sock __init__ /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:331 >2018-07-09 14:18:15.728 732 INFO oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep daemon starting >2018-07-09 14:18:15.733 732 INFO oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep process running with uid/gid: 0/0 >2018-07-09 14:18:15.735 732 INFO oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none >2018-07-09 14:18:15.735 732 INFO oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep daemon running as pid 732 >2018-07-09 14:18:15.798 732 DEBUG oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 
c00c33e0c4644d3694a3a96dc897e47a - default default] privsep: request[140039496816592]: (1,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443 >2018-07-09 14:18:15.798 732 DEBUG oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep: reply[140039496816592]: (2,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456 >2018-07-09 14:18:15.800 732 DEBUG oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep: request[140039496816592]: (3, 'os_brick.privileged.rootwrap.execute_root', ('cat', '/etc/iscsi/initiatorname.iscsi'), {}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443 >2018-07-09 14:18:15.878 732 DEBUG oslo_concurrency.processutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372 >2018-07-09 14:18:15.882 732 DEBUG oslo_concurrency.processutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409 >2018-07-09 14:18:15.883 732 DEBUG oslo.privsep.daemon [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] privsep: reply[140039496816592]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ffa7a367add0\n', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456 >2018-07-09 14:18:15.884 1 DEBUG os_brick.initiator.linuxfc [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - 
default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:134 >2018-07-09 14:18:15.884 1 DEBUG os_brick.initiator.linuxfc [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:134 >2018-07-09 14:18:15.885 1 DEBUG os_brick.utils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] <== get_connector_properties: return (762ms) {'initiator': u'iqn.1994-05.com.redhat:ffa7a367add0', 'ip': u'172.17.1.13', 'platform': u'x86_64', 'host': u'compute-0.localdomain', 'do_local_attach': False, 'os_type': u'linux2', 'multipath': False} trace_logging_wrapper /usr/lib/python2.7/site-packages/os_brick/utils.py:170 >2018-07-09 14:18:15.886 1 DEBUG nova.virt.block_device [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating existing volume attachment record: dee85738-26ea-4fc6-9c59-f108b7608ce6 _volume_attach /usr/lib/python2.7/site-packages/nova/virt/block_device.py:526 >2018-07-09 14:18:15.889 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] REQ: curl -g -i -X PUT http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/attachments/dee85738-26ea-4fc6-9c59-f108b7608ce6 -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}a5d84b86c002fd3388fe236e5bf7b9f9d1ae62db" -H "X-OpenStack-Request-ID: req-b2e6d797-a712-45c6-b1c3-024b1ac3520b" -d '{"attachment": {"connector": {"initiator": 
"iqn.1994-05.com.redhat:ffa7a367add0", "ip": "172.17.1.13", "platform": "x86_64", "host": "compute-0.localdomain", "do_local_attach": false, "mountpoint": "/dev/vdb", "os_type": "linux2", "multipath": false}}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448 >2018-07-09 14:18:16.976 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP: [200] Content-Encoding: gzip Content-Length: 392 Content-Type: application/json Date: Mon, 09 Jul 2018 14:18:15 GMT OpenStack-API-Version: volume 3.44 Server: Apache Vary: OpenStack-API-Version,Accept-Encoding x-compute-request-id: req-5f4c8628-2a88-42a5-9169-e8261b07bb32 x-openstack-request-id: req-5f4c8628-2a88-42a5-9169-e8261b07bb32 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479 >2018-07-09 14:18:16.977 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP BODY: {"attachment": {"status": "reserved", "detached_at": "", "connection_info": {"attachment_id": "dee85738-26ea-4fc6-9c59-f108b7608ce6", "encrypted": false, "driver_volume_type": "rbd", "secret_uuid": "a7f60d80-8032-11e8-8802-52540031427e", "qos_specs": null, "volume_id": "b3a1989b-b3c8-4570-a51d-574524a409da", "auth_username": "openstack", "secret_type": "ceph", "name": "volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da", "discard": true, "keyring": null, "cluster_name": "ceph", "auth_enabled": true, "hosts": ["172.17.3.22"], "access_mode": "rw", "ports": ["6789"]}, "attached_at": "", "attach_mode": null, "instance": "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e", "volume_id": "b3a1989b-b3c8-4570-a51d-574524a409da", "id": "dee85738-26ea-4fc6-9c59-f108b7608ce6"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511 >2018-07-09 14:18:16.977 1 DEBUG cinderclient.v3.client 
[req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] PUT call to cinderv3 for http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/attachments/dee85738-26ea-4fc6-9c59-f108b7608ce6 used request id req-5f4c8628-2a88-42a5-9169-e8261b07bb32 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844 >2018-07-09 14:18:16.985 1 DEBUG nova.virt.libvirt.driver [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Attempting to attach volume b3a1989b-b3c8-4570-a51d-574524a409da with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:1393 >2018-07-09 14:18:16.987 1 DEBUG nova.virt.libvirt.guest [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] attach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da"> > <host name="172.17.3.22" port="6789"/> > </source> > <auth username="openstack"> > <secret type="ceph" uuid="a7f60d80-8032-11e8-8802-52540031427e"/> > </auth> > <target bus="virtio" dev="vdb"/> > <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial> ></disk> > attach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:302 >2018-07-09 14:18:17.123 1 DEBUG nova.virt.libvirt.driver [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:8687 >2018-07-09 14:18:17.123 1 DEBUG nova.virt.libvirt.driver [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:8687 >2018-07-09 14:18:17.167 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] REQ: curl -g -i -X POST http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/attachments/dee85738-26ea-4fc6-9c59-f108b7608ce6/action -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}a5d84b86c002fd3388fe236e5bf7b9f9d1ae62db" -H "X-OpenStack-Request-ID: req-b2e6d797-a712-45c6-b1c3-024b1ac3520b" -d '{"os-complete": null}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448 >2018-07-09 14:18:17.243 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP: [204] Content-Type: application/json Date: Mon, 09 Jul 2018 14:18:17 GMT OpenStack-API-Version: volume 3.44 Server: Apache Vary: OpenStack-API-Version x-compute-request-id: req-706d3a21-372b-440b-92cf-1aa00530b277 x-openstack-request-id: req-706d3a21-372b-440b-92cf-1aa00530b277 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479 >2018-07-09 14:18:17.243 1 DEBUG cinderclient.v3.client [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] POST call to cinderv3 for http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/attachments/dee85738-26ea-4fc6-9c59-f108b7608ce6/action 
used request id req-706d3a21-372b-440b-92cf-1aa00530b277 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844 >2018-07-09 14:18:17.244 1 DEBUG oslo_concurrency.lockutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" released by "nova.virt.block_device._do_locked_attach" :: held 2.123s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:18:17.262 1 DEBUG nova.objects.instance [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'flavor' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:18:17.310 1 DEBUG oslo_concurrency.lockutils [req-b2e6d797-a712-45c6-b1c3-024b1ac3520b c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" released by "nova.compute.manager.do_attach_volume" :: held 2.327s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:18:20.638 1 INFO nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Detaching volume b3a1989b-b3c8-4570-a51d-574524a409da >2018-07-09 14:18:20.650 1 DEBUG cinderclient.v3.client [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] REQ: curl -g -i -X GET http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da -H "Accept: application/json" -H "OpenStack-API-Version: volume 3.48" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}a5d84b86c002fd3388fe236e5bf7b9f9d1ae62db" -H 
"X-OpenStack-Request-ID: req-90ce45bf-6668-4426-9560-d5690eb9d248" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448 >2018-07-09 14:18:20.743 1 DEBUG cinderclient.v3.client [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP: [200] Content-Encoding: gzip Content-Length: 624 Content-Type: application/json Date: Mon, 09 Jul 2018 14:18:20 GMT OpenStack-API-Version: volume 3.48 Server: Apache Vary: OpenStack-API-Version,Accept-Encoding x-compute-request-id: req-9f5bb823-34b6-4a7b-be65-57feed3332d9 x-openstack-request-id: req-9f5bb823-34b6-4a7b-be65-57feed3332d9 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479 >2018-07-09 14:18:20.744 1 DEBUG cinderclient.v3.client [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP BODY: {"volume": {"attachments": [{"server_id": "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e", "attachment_id": "dee85738-26ea-4fc6-9c59-f108b7608ce6", "attached_at": "2018-07-09T14:18:16.000000", "host_name": "compute-0.localdomain", "volume_id": "b3a1989b-b3c8-4570-a51d-574524a409da", "device": "/dev/vdb", "id": "b3a1989b-b3c8-4570-a51d-574524a409da"}], "links": [{"href": "http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da", "rel": "self"}, {"href": "http://172.17.1.10:8776/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da", "rel": "bookmark"}], "availability_zone": "nova", "encrypted": false, "updated_at": "2018-07-09T14:18:20.000000", "replication_status": null, "snapshot_id": null, "id": "b3a1989b-b3c8-4570-a51d-574524a409da", "size": 1, "user_id": "c63d372255b547688c3f4ebac60ee839", "os-vol-tenant-attr:tenant_id": "c00c33e0c4644d3694a3a96dc897e47a", "metadata": {"attached_mode": "rw"}, "status": "detaching", "description": null, "multiattach": false, 
"service_uuid": "b9156d9d-46d2-408a-8602-b15a0593a288", "source_volid": null, "consistencygroup_id": null, "name": "tempest-VolumesAdminNegativeTest-volume-788599059", "bootable": "false", "shared_targets": true, "volume_type": null, "group_id": null, "created_at": "2018-07-09T14:18:11.000000"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511 >2018-07-09 14:18:20.744 1 DEBUG cinderclient.v3.client [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] GET call to cinderv3 for http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/volumes/b3a1989b-b3c8-4570-a51d-574524a409da used request id req-9f5bb823-34b6-4a7b-be65-57feed3332d9 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844 >2018-07-09 14:18:20.745 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" acquired by "nova.virt.block_device._do_locked_detach" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:18:20.746 1 INFO nova.virt.block_device [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Attempting to driver detach volume b3a1989b-b3c8-4570-a51d-574524a409da from mountpoint /dev/vdb >2018-07-09 14:18:20.751 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Attempting initial detach for device vdb detach_device_with_retry /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:426 >2018-07-09 14:18:20.752 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 
c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:18:25.766 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? 1. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:18:25.766 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Start retrying detach until device vdb is gone. detach_device_with_retry /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:442 >2018-07-09 14:18:25.767 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Waiting for function nova.virt.libvirt.guest._do_wait_and_retry_detach to return. 
func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:478 >2018-07-09 14:18:25.769 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:18:30.776 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:18:30.776 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:18:32.777 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 1. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:18:32.779 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:18:37.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:18:37.293 1 INFO nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Updating bandwidth usage cache >2018-07-09 14:18:37.335 1 INFO nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Bandwidth usage not supported by libvirt.LibvirtDriver. >2018-07-09 14:18:37.786 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? 
True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:18:37.786 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:18:38.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:18:38.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908 >2018-07-09 14:18:41.308 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:18:41.787 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 2. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
2018-07-09 14:18:41.789 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da">
    <host name="172.17.3.22" port="6789"/>
  </source>
  <target bus="virtio" dev="vdb"/>
  <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
</disk>
 detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
2018-07-09 14:18:42.305 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:44.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:44.289 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:44.317 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:44.317 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:45.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:45.314 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-07-09 14:18:45.419 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
2018-07-09 14:18:45.420 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
2018-07-09 14:18:45.463 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4960MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-07-09 14:18:45.464 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:18:45.554 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:18:45.573 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:18:45.685 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
2018-07-09 14:18:45.736 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
2018-07-09 14:18:45.737 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
2018-07-09 14:18:45.737 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
2018-07-09 14:18:45.822 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:18:45.841 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:18:45.924 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
2018-07-09 14:18:45.925 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.461s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:18:45.925 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:45.926 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-07-09 14:18:45.926 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-07-09 14:18:45.945 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2018-07-09 14:18:45.946 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
2018-07-09 14:18:45.946 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
2018-07-09 14:18:46.796 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
2018-07-09 14:18:46.797 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
2018-07-09 14:18:46.898 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
2018-07-09 14:18:46.955 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
2018-07-09 14:18:46.971 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2018-07-09 14:18:46.971 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
2018-07-09 14:18:47.338 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:47.339 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-07-09 14:18:47.340 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:48.294 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:49.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:18:49.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865
2018-07-09 14:18:49.310 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874
2018-07-09 14:18:52.798 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 3. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
2018-07-09 14:18:52.800 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da">
    <host name="172.17.3.22" port="6789"/>
  </source>
  <target bus="virtio" dev="vdb"/>
  <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
</disk>
 detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
2018-07-09 14:18:57.806 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
2018-07-09 14:18:57.806 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
2018-07-09 14:19:05.807 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 4. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
2018-07-09 14:19:05.810 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da">
    <host name="172.17.3.22" port="6789"/>
  </source>
  <target bus="virtio" dev="vdb"/>
  <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
</disk>
 detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
2018-07-09 14:19:10.816 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
2018-07-09 14:19:10.816 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
2018-07-09 14:19:20.817 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 5. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
2018-07-09 14:19:20.820 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da">
    <host name="172.17.3.22" port="6789"/>
  </source>
  <target bus="virtio" dev="vdb"/>
  <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
</disk>
 detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
2018-07-09 14:19:25.827 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
2018-07-09 14:19:25.828 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
2018-07-09 14:19:37.828 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 6. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
2018-07-09 14:19:37.831 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da">
    <host name="172.17.3.22" port="6789"/>
  </source>
  <target bus="virtio" dev="vdb"/>
  <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
</disk>
 detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
2018-07-09 14:19:42.837 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
2018-07-09 14:19:42.837 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
2018-07-09 14:19:43.310 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:44.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:44.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:44.293 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:46.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:46.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-07-09 14:19:46.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-07-09 14:19:46.313 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2018-07-09 14:19:46.314 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
2018-07-09 14:19:46.314 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
2018-07-09 14:19:46.691 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
2018-07-09 14:19:46.751 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
2018-07-09 14:19:46.766 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2018-07-09 14:19:46.767 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
2018-07-09 14:19:47.768 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:47.793 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-07-09 14:19:47.922 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
2018-07-09 14:19:47.922 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
2018-07-09 14:19:47.975 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4959MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-07-09 14:19:47.975 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:19:48.088 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:19:48.112 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:19:48.262 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
2018-07-09 14:19:48.315 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
2018-07-09 14:19:48.316 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
2018-07-09 14:19:48.316 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
2018-07-09 14:19:48.405 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:19:48.423 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:19:48.509 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
2018-07-09 14:19:48.509 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.534s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:19:48.510 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:50.035 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:50.035 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-07-09 14:19:50.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:19:56.838 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 7. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:19:56.840 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-b3a1989b-b3c8-4570-a51d-574524a409da"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>b3a1989b-b3c8-4570-a51d-574524a409da</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:20:01.848 1 DEBUG nova.virt.libvirt.guest [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:20:01.848 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:20:01.849 1 DEBUG oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cannot retry nova.virt.libvirt.guest._do_wait_and_retry_detach upon suggested exception since retry count (7) reached max retry count (7). 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:466 >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Dynamic interval looping call 'oslo_service.loopingcall._func' failed: DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain. >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall Traceback (most recent call last): >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 193, in _run_loop >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall result = func(*self.args, **self.kw) >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 471, in _func >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall return self._sleep_time >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall self.force_reraise() >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall six.reraise(self.type_, self.value, self.tb) >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 450, in _func >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall result = f(*args, **kwargs) >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 457, in _do_wait_and_retry_detach >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall device=alternative_device_name, reason=reason) >2018-07-09 14:20:01.849 1 ERROR 
oslo.service.loopingcall DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain. >2018-07-09 14:20:01.849 1 ERROR oslo.service.loopingcall >2018-07-09 14:20:01.852 1 WARNING nova.virt.block_device [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Guest refused to detach volume b3a1989b-b3c8-4570-a51d-574524a409da: DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain. >2018-07-09 14:20:01.852 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" released by "nova.virt.block_device._do_locked_detach" :: held 101.107s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Exception during message handling: DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain. 
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 79, in wrapped
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     function_name, call_dict, binary, tb)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 69, in wrapped
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1085, in decorated_function
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 213, in decorated_function
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     kwargs['instance'], e, sys.exc_info())
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 201, in decorated_function
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5497, in detach_volume
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     attachment_id=attachment_id)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5450, in _detach_volume
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     attachment_id=attachment_id, destroy_bdm=destroy_bdm)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 429, in detach
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     attachment_id, destroy_bdm)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 426, in _do_locked_detach
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     self._do_detach(*args, **_kwargs)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 355, in _do_detach
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     self.driver_detach(context, instance, volume_api, virt_driver)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 324, in driver_detach
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     {'vol': volume_id}, instance=instance)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 314, in driver_detach
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     encryption=encryption)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1594, in detach_volume
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     wait_for_detach()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 479, in func
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return evt.wait()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return hubs.get_hub().switch()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return self.greenlet.switch()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 193, in _run_loop
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     result = func(*self.args, **self.kw)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 471, in _func
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     return self._sleep_time
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 450, in _func
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     result = f(*args, **kwargs)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 457, in _do_wait_and_retry_detach
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server     device=alternative_device_name, reason=reason)
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.
2018-07-09 14:20:01.905 1 ERROR oslo_messaging.rpc.server
2018-07-09 14:20:43.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:44.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:45.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:46.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:47.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:47.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
2018-07-09 14:20:47.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
2018-07-09 14:20:47.314 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2018-07-09 14:20:47.315 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
2018-07-09 14:20:47.315 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
2018-07-09 14:20:47.415 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
2018-07-09 14:20:47.472 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
2018-07-09 14:20:47.488 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2018-07-09 14:20:47.489 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
2018-07-09 14:20:48.490 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:48.511 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
2018-07-09 14:20:48.613 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
2018-07-09 14:20:48.614 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
2018-07-09 14:20:48.658 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4967MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
2018-07-09 14:20:48.658 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:20:49.057 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:20:49.075 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:20:49.193 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
2018-07-09 14:20:49.249 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
2018-07-09 14:20:49.250 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
2018-07-09 14:20:49.250 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
2018-07-09 14:20:49.334 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
2018-07-09 14:20:49.356 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
2018-07-09 14:20:49.440 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
2018-07-09 14:20:49.441 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.783s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:20:49.442 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:49.442 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:51.267 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:20:51.268 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
2018-07-09 14:20:51.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:12.442 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:12.466 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Triggering sync for uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e _sync_power_states /usr/lib/python2.7/site-packages/nova/compute/manager.py:7229
2018-07-09 14:21:12.467 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" acquired by "nova.compute.manager.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2018-07-09 14:21:12.519 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" released by "nova.compute.manager.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2018-07-09 14:21:44.317 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:44.318 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:45.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:46.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:49.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:49.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2018-07-09 14:21:49.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:21:49.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:21:49.313 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:21:49.313 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:21:49.314 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:21:49.907 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:21:49.976 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:21:49.992 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:21:49.993 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:21:50.994 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:21:51.017 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:21:51.121 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:21:51.122 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:21:51.168 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4968MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", 
"dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:21:51.168 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" 
acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:21:51.262 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:21:51.283 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:21:51.400 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:21:51.451 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. 
_remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:21:51.451 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:21:51.452 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:21:51.538 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:21:51.557 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:21:51.640 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:21:51.641 1 DEBUG oslo_concurrency.lockutils 
[req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.473s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:21:52.939 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:21:52.940 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:21:53.293 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:45.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:45.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:46.288 1 DEBUG 
oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:48.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:49.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:49.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:22:49.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:22:49.316 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:22:49.316 1 DEBUG nova.network.neutronv2.api 
[req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:22:49.316 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:22:49.860 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:22:49.928 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:22:49.947 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:22:49.947 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:22:51.948 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:51.969 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:22:52.068 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:22:52.068 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:22:52.113 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4968MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": 
"pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, 
"vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:22:52.113 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:22:52.203 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:22:52.224 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:22:52.335 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 
c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:22:52.385 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:22:52.386 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:22:52.386 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:22:52.474 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:22:52.494 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:22:52.573 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:22:52.573 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.460s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:22:52.574 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:52.918 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:52.918 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:22:54.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:22:54.315 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:45.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:47.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:47.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:48.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task 
ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:48.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908 >2018-07-09 14:23:49.308 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:51.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:51.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:23:51.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:23:51.314 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" 
lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:23:51.314 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:23:51.314 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:23:51.424 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:23:51.480 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:23:51.497 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:23:51.497 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:23:52.498 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:52.498 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:23:52.499 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:52.520 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:23:52.620 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:23:52.621 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:23:52.665 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4969MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": 
"pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, 
"vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:23:52.666 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:23:52.750 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:23:52.769 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:23:52.886 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 
c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:23:52.937 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:23:52.938 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:23:52.938 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:23:53.024 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:23:53.042 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:23:53.125 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:23:53.126 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.460s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:23:53.126 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:53.303 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:23:54.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:02.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:02.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865 >2018-07-09 14:24:02.315 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874 >2018-07-09 14:24:47.315 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:47.316 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:49.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 
>2018-07-09 14:24:49.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:52.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:52.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:24:52.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:52.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:24:52.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:24:52.314 1 DEBUG oslo_concurrency.lockutils 
[req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:24:52.314 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:24:52.315 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:24:52.416 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:24:52.475 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:24:52.491 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:24:52.492 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:24:53.492 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:24:53.513 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:24:53.614 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:24:53.615 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:24:53.660 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4958MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", 
"dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:24:53.661 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" 
acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:24:53.747 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:24:53.766 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:24:53.874 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:24:53.925 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. 
_remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:24:53.926 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:24:53.926 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:24:54.037 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:24:54.057 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:24:54.135 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:24:54.136 1 DEBUG oslo_concurrency.lockutils 
[req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.475s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:24:54.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:24:56.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:24:57.289 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:48.316 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:48.317 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:49.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:50.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:53.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:53.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:25:53.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:25:53.314 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:25:53.314 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:25:53.315 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:25:53.707 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:25:53.766 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:25:53.782 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:25:53.782 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:25:54.783 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:54.784 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:25:54.785 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:54.810 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:25:54.906 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:25:54.906 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:25:54.954 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4958MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:25:54.955 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:25:55.326 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:25:55.344 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:25:55.452 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:25:55.498 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:25:55.499 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:25:55.499 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:25:55.583 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:25:55.601 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:25:55.677 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:25:55.677 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.723s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:25:56.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:25:58.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:48.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:48.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:50.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:50.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:53.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:53.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:26:53.292 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:26:53.314 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:26:53.315 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:26:53.315 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:26:53.715 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:26:53.777 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:26:53.794 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:26:53.794 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:26:56.795 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:56.795 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:26:56.796 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:56.818 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:26:56.913 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:26:56.914 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:26:56.962 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4959MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:26:56.963 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:26:57.053 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:26:57.073 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:26:57.192 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:26:57.251 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:26:57.252 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:26:57.252 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:26:57.341 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:26:57.362 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:26:57.448 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:26:57.448 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.486s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:26:57.449 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:58.313 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:26:58.328 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:48.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:49.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:50.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:51.288 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:55.292 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:55.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:27:55.293 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:27:55.314 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Acquired semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:27:55.315 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:27:55.315 1 DEBUG nova.objects.instance [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lazy-loading 'info_cache' on Instance uuid 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:27:55.415 1 DEBUG nova.network.neutronv2.api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:27:55.476 1 DEBUG nova.network.base_api [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:27:55.493 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Releasing semaphore "refresh_cache-6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:27:55.493 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:27:56.494 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:27:56.515 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:27:56.622 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:27:56.623 1 DEBUG nova.virt.libvirt.driver [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:27:56.674 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4959MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null,
"vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:27:56.675 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:27:56.766 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:27:56.785 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:27:56.906 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 
c00c33e0c4644d3694a3a96dc897e47a - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:27:56.957 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:27:56.957 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:27:56.958 1 INFO nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:27:57.048 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:27:57.068 1 DEBUG nova.scheduler.client.report [req-90ce45bf-6668-4426-9560-d5690eb9d248 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:27:57.158 1 DEBUG nova.compute.resource_tracker [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:27:57.159 1 DEBUG oslo_concurrency.lockutils [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.484s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:27:58.957 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:27:58.957 1 DEBUG nova.compute.manager [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:27:59.293 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:00.291 1 DEBUG oslo_service.periodic_task [req-90ce45bf-6668-4426-9560-d5690eb9d248 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:25.173 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" acquired by "nova.compute.manager.do_terminate_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:28:25.174 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e-events" acquired by "nova.compute.manager._clear_events" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:28:25.174 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e-events" released by "nova.compute.manager._clear_events" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:28:25.177 1 INFO nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Terminating instance >2018-07-09 14:28:25.179 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:2353 >2018-07-09 14:28:25.389 1 INFO nova.virt.libvirt.driver [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance destroyed successfully. >2018-07-09 14:28:25.483 1 INFO nova.virt.libvirt.driver [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Deleting instance files /var/lib/nova/instances/6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_del >2018-07-09 14:28:25.484 1 INFO nova.virt.libvirt.driver [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Deletion of /var/lib/nova/instances/6b44d4d1-7d58-4b81-b6ba-7e773bffa94e_del complete >2018-07-09 14:28:25.574 1 INFO nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Took 0.39 seconds to destroy the instance on the hypervisor. 
>2018-07-09 14:28:25.575 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Deallocating network for instance _deallocate_network /usr/lib/python2.7/site-packages/nova/compute/manager.py:1644 >2018-07-09 14:28:25.576 1 DEBUG nova.network.neutronv2.api [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] deallocate_for_instance() deallocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1280 >2018-07-09 14:28:25.934 1 DEBUG nova.network.neutronv2.api [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:28:25.951 1 DEBUG nova.network.base_api [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:28:25.969 1 INFO nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Took 0.39 seconds to deallocate network for instance. 
>2018-07-09 14:28:25.973 1 DEBUG cinderclient.v3.client [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] REQ: curl -g -i -X DELETE http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/attachments/dee85738-26ea-4fc6-9c59-f108b7608ce6 -H "Accept: application/json" -H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}a5d84b86c002fd3388fe236e5bf7b9f9d1ae62db" -H "X-OpenStack-Request-ID: req-5f6eb002-af01-4aa7-897a-ab042686a1b6" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448 >2018-07-09 14:28:26.551 1 DEBUG cinderclient.v3.client [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP: [200] Content-Length: 19 Content-Type: application/json Date: Mon, 09 Jul 2018 14:28:25 GMT OpenStack-API-Version: volume 3.44 Server: Apache Vary: OpenStack-API-Version x-compute-request-id: req-145d2609-64c8-4f20-ba6b-956bf3169498 x-openstack-request-id: req-145d2609-64c8-4f20-ba6b-956bf3169498 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479 >2018-07-09 14:28:26.552 1 DEBUG cinderclient.v3.client [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] RESP BODY: {"attachments": []} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511 >2018-07-09 14:28:26.552 1 DEBUG cinderclient.v3.client [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] DELETE call to cinderv3 for http://172.17.1.10:8776/v3/c00c33e0c4644d3694a3a96dc897e47a/attachments/dee85738-26ea-4fc6-9c59-f108b7608ce6 used request id req-145d2609-64c8-4f20-ba6b-956bf3169498 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844 >2018-07-09 14:28:26.553 1 INFO 
nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Took 0.58 seconds to detach 1 volumes for instance. >2018-07-09 14:28:26.734 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.update_usage" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:28:26.831 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:28:26.859 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:28:26.951 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker.update_usage" :: held 0.217s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:28:26.976 1 INFO nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Deleted allocation for instance 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e 
>2018-07-09 14:28:26.993 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "6b44d4d1-7d58-4b81-b6ba-7e773bffa94e" released by "nova.compute.manager.do_terminate_instance" :: held 1.821s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:28:40.387 1 DEBUG nova.virt.driver [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Emitting event <LifecycleEvent: 1531146505.39, 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e => Stopped> emit_event /usr/lib/python2.7/site-packages/nova/virt/driver.py:1521 >2018-07-09 14:28:40.388 1 INFO nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] VM Stopped (Lifecycle Event) >2018-07-09 14:28:40.432 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] [instance: 6b44d4d1-7d58-4b81-b6ba-7e773bffa94e] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166 >2018-07-09 14:28:41.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:49.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:50.291 1 
DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:52.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:52.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:56.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:56.304 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:56.305 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:28:56.305 1 DEBUG 
nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:28:56.321 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:28:58.322 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:58.346 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:28:58.438 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5011MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": 
"pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, 
"vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:28:58.439 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:28:58.536 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:28:58.556 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:28:58.718 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:28:58.719 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:28:58.807 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:28:58.825 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:28:58.910 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:28:58.911 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.472s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:28:59.881 1 DEBUG oslo_service.periodic_task 
[req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:28:59.881 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:29:00.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:00.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:01.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:01.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:01.293 1 DEBUG nova.compute.manager 
[req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908 >2018-07-09 14:29:09.307 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:09.308 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865 >2018-07-09 14:29:09.327 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874 >2018-07-09 14:29:50.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:50.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:52.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 
c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:52.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:57.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:57.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:29:57.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:29:57.315 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:29:59.316 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:59.317 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:29:59.317 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:29:59.341 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:29:59.444 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", 
"dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", 
"address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:29:59.445 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:29:59.545 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:29:59.567 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:29:59.735 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable 
vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:29:59.736 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:29:59.827 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:29:59.846 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:29:59.928 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:29:59.929 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.484s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 
14:30:01.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:01.310 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:50.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:52.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:52.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:52.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 
>2018-07-09 14:30:57.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:57.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:30:57.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:30:57.312 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:30:59.313 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:30:59.313 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:31:00.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:00.315 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:31:00.412 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:31:00.413 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:31:01.396 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:31:01.417 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:31:01.583 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:31:01.584 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:31:01.686 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - 
default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:31:01.709 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:31:01.791 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:31:01.792 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 1.379s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:31:02.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:02.309 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:03.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:41.463 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:50.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:52.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:52.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:54.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage 
run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:58.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:31:58.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:31:58.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:31:58.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:32:01.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:32:01.312 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:32:01.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:32:01.335 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:32:01.430 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5013MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:32:01.431 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:32:01.518 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:32:01.538 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:32:01.695 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:32:01.696 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:32:01.783 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:32:01.802 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:32:01.882 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:32:01.882 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.451s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:32:02.863 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:32:04.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:32:51.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:32:53.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:32:54.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:32:54.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:00.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:00.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:33:00.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:33:00.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:33:02.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:03.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:03.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:33:03.293 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:03.315 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:33:03.410 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:33:03.411 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:33:03.497 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:33:03.516 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:33:03.673 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:33:03.673 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:33:03.758 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:33:03.776 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:33:03.865 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:33:03.865 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.455s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:33:04.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:07.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:53.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:54.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:54.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:55.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:33:59.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:02.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:02.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:34:02.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:34:02.310 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:34:03.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:03.333 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:34:03.431 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5011MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:34:03.432 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:34:03.526 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:34:03.551 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:34:03.717 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:34:03.718 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:34:03.807 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:34:03.824 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:34:03.902 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:34:03.903 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.471s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:34:04.884 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:04.902 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:04.903 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:34:05.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:12.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:12.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865
>2018-07-09 14:34:12.313 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874
>2018-07-09 14:34:12.314 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:12.314 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908
>2018-07-09 14:34:53.330 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:55.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:55.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:34:56.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:35:04.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:35:04.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:35:04.293 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:35:04.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:35:04.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:35:04.313 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:35:05.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:35:05.329 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:35:05.355 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:35:05.451 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:35:05.452 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:35:05.546 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:35:05.563 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:35:05.719 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:35:05.720 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:35:05.808 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:35:05.829 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:35:05.908 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:35:05.908 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.457s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:35:05.909 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:35:09.289 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:35:53.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:35:55.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:35:57.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:35:58.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:05.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:05.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:36:05.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:36:05.312 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:36:06.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:06.313 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:36:06.314 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:06.335 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:36:06.443 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:36:06.444 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:36:06.830 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:36:06.849 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:36:06.999 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:36:07.000 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:36:07.086 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - 
default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:36:07.104 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:36:07.182 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:36:07.182 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.739s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:36:07.183 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:08.162 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:53.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:57.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:36:58.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:00.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:06.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:06.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache 
run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:06.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:37:06.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:37:06.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:37:07.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:07.326 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:07.327 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:37:07.327 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:07.347 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:37:07.444 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5011MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:37:07.445 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:37:07.540 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:37:07.558 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:37:07.709 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:37:07.710 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:37:07.798 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - 
default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:37:07.816 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:37:07.892 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:37:07.892 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.448s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:37:09.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:55.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:58.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:37:59.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:01.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:06.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:08.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:08.309 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes 
run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:08.310 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:38:08.310 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:08.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:38:08.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:38:08.328 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:38:09.309 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:09.331 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:38:09.427 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:38:09.427 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:38:09.516 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:38:09.539 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:38:09.695 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:38:09.696 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:38:09.787 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - 
default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:38:09.810 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:38:09.899 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:38:09.899 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.472s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:38:37.882 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:38:55.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:00.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:01.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:01.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:04.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:08.304 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:09.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource 
run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:09.316 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:39:09.408 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": 
"pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:39:09.409 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:39:09.500 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:39:09.520 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:39:09.678 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:39:09.679 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:39:09.770 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:39:09.790 1 DEBUG nova.scheduler.client.report 
[req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:39:09.887 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:39:09.887 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.479s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:39:09.888 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:09.888 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:39:09.889 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:39:09.907 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:39:10.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:10.327 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:10.327 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:39:10.328 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:14.289 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:16.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:16.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908 >2018-07-09 14:39:23.308 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:39:23.309 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances _run_pending_deletes 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:7865 >2018-07-09 14:39:23.326 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874 >2018-07-09 14:39:56.310 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:01.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:01.293 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:02.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:09.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:11.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:11.308 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:11.308 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:40:11.309 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:40:11.329 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:40:11.426 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: 
name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": 
"pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:40:11.427 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:40:11.526 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:40:11.546 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, 
traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:40:11.702 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:40:11.702 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:40:11.793 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:40:11.813 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:40:11.896 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:40:11.897 1 DEBUG oslo_concurrency.lockutils 
[req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.470s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:40:11.897 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:40:11.897 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:40:11.898 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:40:11.914 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:40:57.309 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:02.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:02.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:03.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:10.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:12.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:12.308 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:12.309 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:41:12.309 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:41:12.327 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:41:13.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:13.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:41:13.312 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:13.333 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:41:13.430 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:41:13.431 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:41:13.812 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:41:13.831 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:41:13.985 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:41:13.985 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:41:14.077 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:41:14.096 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:41:14.176 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:41:14.176 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.745s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:41:17.289 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:56.462 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:41:57.310 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:03.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:03.293 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:04.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:11.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:12.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:12.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:42:12.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:42:12.314 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:42:14.314 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:14.330 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:14.350 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:42:14.441 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:42:14.442 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:42:14.529 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:42:14.548 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:42:14.702 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:42:14.702 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:42:14.782 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:42:14.801 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:42:14.950 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:42:14.950 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.508s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:42:15.912 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:42:15.913 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:42:59.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:04.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:05.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:06.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:12.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:13.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:13.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:43:13.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:43:13.310 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831
>2018-07-09 14:43:14.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:14.332 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:43:14.422 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:43:14.422 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:43:14.508 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:43:14.529 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:43:14.674 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:43:14.675 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]
>2018-07-09 14:43:14.820 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:43:14.840 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:43:14.917 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:43:14.918 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.496s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:43:16.899 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:16.914 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:43:16.915 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:43:18.289 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:00.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:04.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:05.306 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:05.306 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:06.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:13.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:14.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:44:14.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:44:14.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:44:14.311 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:44:15.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:44:15.334 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:44:15.422 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5012MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:44:15.422 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:44:15.506 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:44:15.524 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:44:15.669 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:44:15.670 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:44:15.749 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - 
default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:44:15.765 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:44:15.840 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:44:15.840 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.418s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:44:16.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:44:16.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:7908 >2018-07-09 14:44:17.306 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:44:18.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:44:18.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:44:29.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:44:29.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865 >2018-07-09 14:44:29.309 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874 >2018-07-09 14:45:01.310 1 DEBUG 
oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:06.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:07.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:08.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:15.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:15.314 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource 
/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:45:15.404 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5011MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": 
"type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:45:15.404 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:45:15.489 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations 
/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:45:15.508 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:45:15.652 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:45:15.652 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:45:15.743 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:45:15.761 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:45:15.837 1 DEBUG nova.compute.resource_tracker 
[req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:45:15.838 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.433s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:45:15.838 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:16.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:16.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:45:16.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:45:16.311 1 DEBUG 
nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:45:17.311 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:20.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:45:20.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:45:21.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:01.310 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:08.288 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:08.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:09.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:16.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task 
ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:16.313 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:46:16.408 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=5011MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": 
"label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:46:16.409 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:46:16.768 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:46:16.786 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:46:16.935 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:46:16.936 1 INFO nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:46:17.021 1 DEBUG nova.scheduler.client.report [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:46:17.038 1 DEBUG nova.scheduler.client.report 
[req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:46:17.115 1 DEBUG nova.compute.resource_tracker [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:46:17.116 1 DEBUG oslo_concurrency.lockutils [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.707s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:46:17.291 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:17.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:17.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:46:17.293 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:46:17.310 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:46:19.310 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:20.292 1 DEBUG oslo_service.periodic_task [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:46:20.292 1 DEBUG nova.compute.manager [req-5f6eb002-af01-4aa7-897a-ab042686a1b6 c63d372255b547688c3f4ebac60ee839 c00c33e0c4644d3694a3a96dc897e47a - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:46:53.179 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:46:53.204 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Starting instance... _do_build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:1792 >2018-07-09 14:46:53.306 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.instance_claim" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:46:53.309 1 DEBUG nova.compute.resource_tracker [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Memory overhead for 64 MB instance; 0 MB instance_claim /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:202 >2018-07-09 14:46:53.309 1 DEBUG nova.compute.resource_tracker [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Disk overhead for 0 GB instance; 0 GB instance_claim /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:205 >2018-07-09 14:46:53.310 1 DEBUG nova.compute.resource_tracker [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CPU 
overhead for 1 vCPUs instance; 0 vCPU(s) instance_claim /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:208 >2018-07-09 14:46:53.325 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Attempting claim on node compute-0.localdomain: memory 64 MB, disk 0 GB, vcpus 1 CPU >2018-07-09 14:46:53.326 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Total memory: 6143 MB, used: 4096.00 MB >2018-07-09 14:46:53.326 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] memory limit not specified, defaulting to unlimited >2018-07-09 14:46:53.326 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Total disk: 39 GB, used: 0.00 GB >2018-07-09 14:46:53.327 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] disk limit not specified, defaulting to unlimited >2018-07-09 14:46:53.327 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Total vcpu: 4 VCPU, used: 0.00 VCPU >2018-07-09 14:46:53.327 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: 
c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] vcpu limit not specified, defaulting to unlimited >2018-07-09 14:46:53.328 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python2.7/site-packages/nova/virt/hardware.py:1561 >2018-07-09 14:46:53.329 1 INFO nova.compute.claims [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Claim successful on node compute-0.localdomain >2018-07-09 14:46:53.523 1 DEBUG nova.scheduler.client.report [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:46:53.542 1 DEBUG nova.scheduler.client.report [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:46:53.625 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker.instance_claim" :: held 0.318s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:46:53.626 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - 
default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Start building networks asynchronously for instance. _build_resources /usr/lib/python2.7/site-packages/nova/compute/manager.py:2159 >2018-07-09 14:46:53.715 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Allocating IP information in the background. _allocate_network_async /usr/lib/python2.7/site-packages/nova/compute/manager.py:1372 >2018-07-09 14:46:53.715 1 DEBUG nova.network.neutronv2.api [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] allocate_for_instance() allocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:913 >2018-07-09 14:46:53.732 1 DEBUG nova.block_device [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:575 >2018-07-09 14:46:53.733 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Start building block device mappings for instance. 
_build_resources /usr/lib/python2.7/site-packages/nova/compute/manager.py:2190 >2018-07-09 14:46:53.780 1 DEBUG nova.network.neutronv2.api [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] No network configured allocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:936 >2018-07-09 14:46:53.780 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance network_info: |[]| _allocate_network_async /usr/lib/python2.7/site-packages/nova/compute/manager.py:1386 >2018-07-09 14:46:54.202 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:2006 >2018-07-09 14:46:54.202 1 DEBUG nova.block_device [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:575 >2018-07-09 14:46:54.203 1 INFO nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Creating image >2018-07-09 14:46:54.236 1 DEBUG nova.virt.libvirt.storage.rbd_utils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] rbd image c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_disk does not exist __init__ /usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:74 >2018-07-09 14:46:54.270 1 DEBUG nova.virt.libvirt.storage.rbd_utils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] rbd image c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_disk does not exist __init__ /usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:74 >2018-07-09 14:46:54.274 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "5b0904df4f49e357f8520a218e9db1e083b07d07" acquired by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:46:54.274 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "5b0904df4f49e357f8520a218e9db1e083b07d07" released by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.001s 
inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:46:54.305 1 DEBUG nova.virt.libvirt.storage.rbd_utils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] rbd image c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_disk does not exist __init__ /usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:74 >2018-07-09 14:46:54.308 1 DEBUG oslo_concurrency.processutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07 c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372 >2018-07-09 14:46:54.460 1 DEBUG oslo_concurrency.processutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07 c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409 >2018-07-09 14:46:54.461 1 DEBUG nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Ensure instance console log exists: /var/lib/nova/instances/c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de/console.log _ensure_console_log_for_instance /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3274 >2018-07-09 14:46:54.462 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 
d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:46:54.462 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "vgpu_resources" released by "nova.virt.libvirt.driver._allocate_mdevs" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:46:54.463 1 DEBUG nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'ide', 'mapping': {'disk': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}, 'root': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}}} image_meta=ImageMeta(checksum='7ef58c0f9aa6136021cb61a5d4f275e5',container_format='bare',created_at=2018-07-05T10:18:49Z,direct_url=<?>,disk_format='qcow2',id=52b458a6-e709-434c-8918-5183fe30e9d4,min_disk=0,min_ram=0,name='cirros-0.3.5-x86_64-uec.tar.gz',owner='ca310de510714829bfdfa7879154dc08',properties=ImageMetaProps,protected=<?>,size=8683335,status='active',tags=<?>,updated_at=2018-07-05T10:18:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'swap': None, 'root_device_name': u'/dev/vda', 'ephemerals': [], 'block_device_mapping': []} _get_guest_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5426 >2018-07-09 14:46:54.468 1 DEBUG nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CPU mode 'host-model' model '' was chosen, with extra flags: '' _get_guest_cpu_model_config 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3840 >2018-07-09 14:46:54.468 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Getting desirable topologies for flavor Flavor(created_at=2018-07-05T10:18:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='4fc65822-b6a3-4609-a0ff-abd67de0b86d',id=1,is_public=True,memory_mb=64,name='m1.nano',projects=<?>,root_gb=0,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='7ef58c0f9aa6136021cb61a5d4f275e5',container_format='bare',created_at=2018-07-05T10:18:49Z,direct_url=<?>,disk_format='qcow2',id=52b458a6-e709-434c-8918-5183fe30e9d4,min_disk=0,min_ram=0,name='cirros-0.3.5-x86_64-uec.tar.gz',owner='ca310de510714829bfdfa7879154dc08',properties=ImageMetaProps,protected=<?>,size=8683335,status='active',tags=<?>,updated_at=2018-07-05T10:18:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:562 >2018-07-09 14:46:54.470 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Flavor limits 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:310 >2018-07-09 14:46:54.470 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Image limits 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:314 >2018-07-09 14:46:54.470 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Flavor pref 0:0:0 _get_cpu_topology_constraints 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py:346 >2018-07-09 14:46:54.471 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Image pref 0:0:0 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:350 >2018-07-09 14:46:54.471 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 _get_cpu_topology_constraints /usr/lib/python2.7/site-packages/nova/virt/hardware.py:390 >2018-07-09 14:46:54.471 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:566 >2018-07-09 14:46:54.472 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:429 >2018-07-09 14:46:54.472 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:456 >2018-07-09 14:46:54.473 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py:571 >2018-07-09 14:46:54.473 1 DEBUG nova.virt.hardware [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python2.7/site-packages/nova/virt/hardware.py:597 >2018-07-09 14:46:54.478 1 DEBUG oslo_concurrency.processutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372 >2018-07-09 14:46:54.825 1 DEBUG oslo_concurrency.processutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409 >2018-07-09 14:46:54.836 1 DEBUG nova.objects.instance [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'pci_devices' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:46:54.854 1 DEBUG nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] End _get_guest_xml xml=<domain type="kvm"> > <uuid>c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de</uuid> > <name>instance-0000006b</name> > <memory>65536</memory> > <vcpu>1</vcpu> > <metadata> > <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0"> > <nova:package 
version="18.0.0-0.20180629234416.54343e9.el7ost"/> > <nova:name>tempest-VolumesAdminNegativeTest-server-1059317768</nova:name> > <nova:creationTime>2018-07-09 14:46:54</nova:creationTime> > <nova:flavor name="m1.nano"> > <nova:memory>64</nova:memory> > <nova:disk>0</nova:disk> > <nova:swap>0</nova:swap> > <nova:ephemeral>0</nova:ephemeral> > <nova:vcpus>1</nova:vcpus> > </nova:flavor> > <nova:owner> > <nova:user uuid="fd79cfc452c04158bd3d99c94a110dc5">tempest-VolumesAdminNegativeTest-1935450781</nova:user> > <nova:project uuid="d07b58ddf0d84309bffabd6abdddfc36">tempest-VolumesAdminNegativeTest-1935450781</nova:project> > </nova:owner> > <nova:root type="image" uuid="52b458a6-e709-434c-8918-5183fe30e9d4"/> > </nova:instance> > </metadata> > <sysinfo type="smbios"> > <system> > <entry name="manufacturer">Red Hat</entry> > <entry name="product">OpenStack Compute</entry> > <entry name="version">18.0.0-0.20180629234416.54343e9.el7ost</entry> > <entry name="serial">9acb7829-e53f-45ac-bc93-2c97c861fae8</entry> > <entry name="uuid">c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de</entry> > <entry name="family">Virtual Machine</entry> > </system> > </sysinfo> > <os> > <type>hvm</type> > <boot dev="hd"/> > <smbios mode="sysinfo"/> > </os> > <features> > <acpi/> > <apic/> > </features> > <cputune> > <shares>1024</shares> > </cputune> > <clock offset="utc"> > <timer name="pit" tickpolicy="delay"/> > <timer name="rtc" tickpolicy="catchup"/> > <timer name="hpet" present="no"/> > </clock> > <cpu mode="host-model" match="exact"> > <topology sockets="1" cores="1" threads="1"/> > </cpu> > <devices> > <disk type="network" device="disk"> > <driver type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="vms/c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_disk"> > <host name="172.17.3.22" port="6789"/> > </source> > <auth username="openstack"> > <secret type="ceph" uuid="a7f60d80-8032-11e8-8802-52540031427e"/> > </auth> > <target bus="virtio" dev="vda"/> > </disk> > <serial type="pty"> > 
<log file="/var/lib/nova/instances/c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de/console.log" append="off"/> > </serial> > <input type="tablet" bus="usb"/> > <graphics type="vnc" autoport="yes" keymap="en-us" listen="172.17.1.13"/> > <video> > <model type="cirrus"/> > </video> > <memballoon model="virtio"> > <stats period="10"/> > </memballoon> > </devices> ></domain> > _get_guest_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5433 >2018-07-09 14:46:54.936 1 DEBUG nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:8687 >2018-07-09 14:46:55.573 1 DEBUG nova.virt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Emitting event <LifecycleEvent: 1531147615.57, c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de => Resumed> emit_event /usr/lib/python2.7/site-packages/nova/virt/driver.py:1521 >2018-07-09 14:46:55.574 1 INFO nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] VM Resumed (Lifecycle Event) >2018-07-09 14:46:55.577 1 DEBUG nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Guest created on hypervisor spawn /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3080 >2018-07-09 14:46:55.585 1 INFO nova.virt.libvirt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance spawned successfully. 
>2018-07-09 14:46:55.586 1 INFO nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Took 1.38 seconds to spawn the instance on the hypervisor. >2018-07-09 14:46:55.586 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166 >2018-07-09 14:46:55.619 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166 >2018-07-09 14:46:55.623 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python2.7/site-packages/nova/compute/manager.py:1076 >2018-07-09 14:46:55.668 1 INFO nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] During sync_power_state the instance has a pending task (spawning). Skip. 
>2018-07-09 14:46:55.669 1 DEBUG nova.virt.driver [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Emitting event <LifecycleEvent: 1531147615.58, c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de => Started> emit_event /usr/lib/python2.7/site-packages/nova/virt/driver.py:1521
>2018-07-09 14:46:55.669 1 INFO nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] VM Started (Lifecycle Event)
>2018-07-09 14:46:55.702 1 INFO nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Took 2.40 seconds to build instance.
>2018-07-09 14:46:55.728 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166
>2018-07-09 14:46:55.729 1 DEBUG oslo_concurrency.lockutils [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" released by "nova.compute.manager._locked_do_build_and_run_instance" :: held 2.550s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:46:55.732 1 DEBUG nova.compute.manager [req-c68156a9-7ecd-451e-bbbb-97ddb37efd14 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python2.7/site-packages/nova/compute/manager.py:1076
>2018-07-09 14:46:57.649 1 DEBUG oslo_concurrency.lockutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" acquired by "nova.compute.manager.do_reserve" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:46:57.666 1 DEBUG nova.block_device [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:575
>2018-07-09 14:46:57.666 1 DEBUG nova.objects.instance [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'flavor' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:46:57.731 1 DEBUG oslo_concurrency.lockutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" released by "nova.compute.manager.do_reserve" :: held 0.082s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:46:58.173 1 DEBUG oslo_concurrency.lockutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" acquired by "nova.compute.manager.do_attach_volume" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:46:58.173 1 INFO nova.compute.manager [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Attaching volume 3ff984c8-d23a-411a-9b5c-4db55726a5e7 to /dev/vdb
>2018-07-09 14:46:58.184 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] REQ: curl -g -i -X GET http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7 -H "Accept: application/json" -H "OpenStack-API-Version: volume 3.48" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}27fb659bbceb8f65101abce9a890ac345ce78dc2" -H "X-OpenStack-Request-ID: req-cafd8158-391a-4cc7-81dd-bc0d0781d52a" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448
>2018-07-09 14:46:58.536 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP: [200] Content-Encoding: gzip Content-Length: 506 Content-Type: application/json Date: Mon, 09 Jul 2018 14:46:58 GMT OpenStack-API-Version: volume 3.48 Server: Apache Vary: OpenStack-API-Version,Accept-Encoding x-compute-request-id: req-994e5593-0821-41af-a7c3-2129137e90be x-openstack-request-id: req-994e5593-0821-41af-a7c3-2129137e90be _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479
>2018-07-09 14:46:58.536 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP BODY: {"volume": {"attachments": [], "links": [{"href": "http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7", "rel": "self"}, {"href": "http://172.17.1.10:8776/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7", "rel": "bookmark"}], "availability_zone": "nova", "encrypted": false, "updated_at": "2018-07-09T14:46:58.000000", "replication_status": null, "snapshot_id": null, "id": "3ff984c8-d23a-411a-9b5c-4db55726a5e7", "size": 1, "user_id": "fd79cfc452c04158bd3d99c94a110dc5", "os-vol-tenant-attr:tenant_id": "d07b58ddf0d84309bffabd6abdddfc36", "metadata": {}, "status": "reserved", "description": null, "multiattach": false, "service_uuid": "b9156d9d-46d2-408a-8602-b15a0593a288", "source_volid": null, "consistencygroup_id": null, "name": "tempest-VolumesAdminNegativeTest-volume-29089545", "bootable": "false", "shared_targets": true, "volume_type": null, "group_id": null, "created_at": "2018-07-09T14:46:56.000000"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511
>2018-07-09 14:46:58.537 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] GET call to cinderv3 for http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7 used request id req-994e5593-0821-41af-a7c3-2129137e90be request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844
>2018-07-09 14:46:58.537 1 DEBUG oslo_concurrency.lockutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" acquired by "nova.virt.block_device._do_locked_attach" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:46:58.538 1 DEBUG os_brick.utils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] ==> get_connector_properties: call u"{'execute': None, 'my_ip': '172.17.1.13', 'enforce_multipath': True, 'host': 'compute-0.localdomain', 'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'multipath': False}" trace_logging_wrapper /usr/lib/python2.7/site-packages/os_brick/utils.py:146
>2018-07-09 14:46:58.539 732 DEBUG oslo.privsep.daemon [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep: request[140039495209840]: (3, 'os_brick.privileged.rootwrap.execute_root', ('cat', '/etc/iscsi/initiatorname.iscsi'), {}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
>2018-07-09 14:46:58.540 732 DEBUG oslo_concurrency.processutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>2018-07-09 14:46:58.545 732 DEBUG oslo_concurrency.processutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>2018-07-09 14:46:58.545 732 DEBUG oslo.privsep.daemon [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep: reply[140039495209840]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ffa7a367add0\n', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
>2018-07-09 14:46:58.546 1 DEBUG os_brick.initiator.linuxfc [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:134
>2018-07-09 14:46:58.546 1 DEBUG os_brick.initiator.linuxfc [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python2.7/site-packages/os_brick/initiator/linuxfc.py:134
>2018-07-09 14:46:58.547 1 DEBUG os_brick.utils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] <== get_connector_properties: return (7ms) {'initiator': u'iqn.1994-05.com.redhat:ffa7a367add0', 'ip': u'172.17.1.13', 'platform': u'x86_64', 'host': u'compute-0.localdomain', 'do_local_attach': False, 'os_type': u'linux2', 'multipath': False} trace_logging_wrapper /usr/lib/python2.7/site-packages/os_brick/utils.py:170
>2018-07-09 14:46:58.547 1 DEBUG nova.virt.block_device [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating existing volume attachment record: 379fbf8a-5b61-4df6-b2a7-b4ae35b552bb _volume_attach /usr/lib/python2.7/site-packages/nova/virt/block_device.py:526
>2018-07-09 14:46:58.550 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] REQ: curl -g -i -X PUT http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/attachments/379fbf8a-5b61-4df6-b2a7-b4ae35b552bb -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}27fb659bbceb8f65101abce9a890ac345ce78dc2" -H "X-OpenStack-Request-ID: req-cafd8158-391a-4cc7-81dd-bc0d0781d52a" -d '{"attachment": {"connector": {"initiator": "iqn.1994-05.com.redhat:ffa7a367add0", "ip": "172.17.1.13", "platform": "x86_64", "host": "compute-0.localdomain", "do_local_attach": false, "mountpoint": "/dev/vdb", "os_type": "linux2", "multipath": false}}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448
>2018-07-09 14:46:59.518 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP: [200] Content-Encoding: gzip Content-Length: 393 Content-Type: application/json Date: Mon, 09 Jul 2018 14:46:58 GMT OpenStack-API-Version: volume 3.44 Server: Apache Vary: OpenStack-API-Version,Accept-Encoding x-compute-request-id: req-d6009e8f-492d-458c-b5eb-2d916543e3e1 x-openstack-request-id: req-d6009e8f-492d-458c-b5eb-2d916543e3e1 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479
>2018-07-09 14:46:59.519 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP BODY: {"attachment": {"status": "reserved", "detached_at": "", "connection_info": {"attachment_id": "379fbf8a-5b61-4df6-b2a7-b4ae35b552bb", "encrypted": false, "driver_volume_type": "rbd", "secret_uuid": "a7f60d80-8032-11e8-8802-52540031427e", "qos_specs": null, "volume_id": "3ff984c8-d23a-411a-9b5c-4db55726a5e7", "auth_username": "openstack", "secret_type": "ceph", "name": "volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7", "discard": true, "keyring": null, "cluster_name": "ceph", "auth_enabled": true, "hosts": ["172.17.3.22"], "access_mode": "rw", "ports": ["6789"]}, "attached_at": "", "attach_mode": null, "instance": "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de", "volume_id": "3ff984c8-d23a-411a-9b5c-4db55726a5e7", "id": "379fbf8a-5b61-4df6-b2a7-b4ae35b552bb"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511
>2018-07-09 14:46:59.519 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] PUT call to cinderv3 for http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/attachments/379fbf8a-5b61-4df6-b2a7-b4ae35b552bb used request id req-d6009e8f-492d-458c-b5eb-2d916543e3e1 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844
>2018-07-09 14:46:59.527 1 DEBUG nova.virt.libvirt.driver [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Attempting to attach volume 3ff984c8-d23a-411a-9b5c-4db55726a5e7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:1393
>2018-07-09 14:46:59.529 1 DEBUG nova.virt.libvirt.guest [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] attach device xml: <disk type="network" device="disk">
>  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
>  <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7">
>    <host name="172.17.3.22" port="6789"/>
>  </source>
>  <auth username="openstack">
>    <secret type="ceph" uuid="a7f60d80-8032-11e8-8802-52540031427e"/>
>  </auth>
>  <target bus="virtio" dev="vdb"/>
>  <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial>
></disk>
> attach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:302
>2018-07-09 14:46:59.659 1 DEBUG nova.virt.libvirt.driver [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:8687
>2018-07-09 14:46:59.660 1 DEBUG nova.virt.libvirt.driver [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:8687
>2018-07-09 14:46:59.702 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] REQ: curl -g -i -X POST http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/attachments/379fbf8a-5b61-4df6-b2a7-b4ae35b552bb/action -H "Accept: application/json" -H "Content-Type: application/json" -H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}27fb659bbceb8f65101abce9a890ac345ce78dc2" -H "X-OpenStack-Request-ID: req-cafd8158-391a-4cc7-81dd-bc0d0781d52a" -d '{"os-complete": null}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448
>2018-07-09 14:46:59.773 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP: [204] Content-Type: application/json Date: Mon, 09 Jul 2018 14:46:59 GMT OpenStack-API-Version: volume 3.44 Server: Apache Vary: OpenStack-API-Version x-compute-request-id: req-be64065b-7de3-4287-bcc3-3953779667e5 x-openstack-request-id: req-be64065b-7de3-4287-bcc3-3953779667e5 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479
>2018-07-09 14:46:59.774 1 DEBUG cinderclient.v3.client [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] POST call to cinderv3 for http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/attachments/379fbf8a-5b61-4df6-b2a7-b4ae35b552bb/action used request id req-be64065b-7de3-4287-bcc3-3953779667e5 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844
>2018-07-09 14:46:59.774 1 DEBUG oslo_concurrency.lockutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" released by "nova.virt.block_device._do_locked_attach" :: held 1.237s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:46:59.793 1 DEBUG nova.objects.instance [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'flavor' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:46:59.839 1 DEBUG oslo_concurrency.lockutils [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" released by "nova.compute.manager.do_attach_volume" :: held 1.666s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:47:02.292 1 DEBUG oslo_service.periodic_task [req-cafd8158-391a-4cc7-81dd-bc0d0781d52a fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:47:03.081 1 INFO nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Detaching volume 3ff984c8-d23a-411a-9b5c-4db55726a5e7
>2018-07-09 14:47:03.091 1 DEBUG cinderclient.v3.client [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] REQ: curl -g -i -X GET http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7 -H "Accept: application/json" -H "OpenStack-API-Version: volume 3.48" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}27fb659bbceb8f65101abce9a890ac345ce78dc2" -H "X-OpenStack-Request-ID: req-5be5ac81-32d6-46bc-aa79-54aa92aa0634" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448
>2018-07-09 14:47:03.178 1 DEBUG cinderclient.v3.client [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP: [200] Content-Encoding: gzip Content-Length: 625 Content-Type: application/json Date: Mon, 09 Jul 2018 14:47:03 GMT OpenStack-API-Version: volume 3.48 Server: Apache Vary: OpenStack-API-Version,Accept-Encoding x-compute-request-id: req-cc995caa-6966-4693-a2d5-3f3ec7a5ae4c x-openstack-request-id: req-cc995caa-6966-4693-a2d5-3f3ec7a5ae4c _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479
>2018-07-09 14:47:03.178 1 DEBUG cinderclient.v3.client [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP BODY: {"volume": {"attachments": [{"server_id": "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de", "attachment_id": "379fbf8a-5b61-4df6-b2a7-b4ae35b552bb", "attached_at": "2018-07-09T14:46:59.000000", "host_name": "compute-0.localdomain", "volume_id": "3ff984c8-d23a-411a-9b5c-4db55726a5e7", "device": "/dev/vdb", "id": "3ff984c8-d23a-411a-9b5c-4db55726a5e7"}], "links": [{"href": "http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7", "rel": "self"}, {"href": "http://172.17.1.10:8776/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7", "rel": "bookmark"}], "availability_zone": "nova", "encrypted": false, "updated_at": "2018-07-09T14:47:03.000000", "replication_status": null, "snapshot_id": null, "id": "3ff984c8-d23a-411a-9b5c-4db55726a5e7", "size": 1, "user_id": "fd79cfc452c04158bd3d99c94a110dc5", "os-vol-tenant-attr:tenant_id": "d07b58ddf0d84309bffabd6abdddfc36", "metadata": {"attached_mode": "rw"}, "status": "detaching", "description": null, "multiattach": false, "service_uuid": "b9156d9d-46d2-408a-8602-b15a0593a288", "source_volid": null, "consistencygroup_id": null, "name": "tempest-VolumesAdminNegativeTest-volume-29089545", "bootable": "false", "shared_targets": true, "volume_type": null, "group_id": null, "created_at": "2018-07-09T14:46:56.000000"}} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511
>2018-07-09 14:47:03.179 1 DEBUG cinderclient.v3.client [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] GET call to cinderv3 for http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/volumes/3ff984c8-d23a-411a-9b5c-4db55726a5e7 used request id req-cc995caa-6966-4693-a2d5-3f3ec7a5ae4c request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844
>2018-07-09 14:47:03.179 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" acquired by "nova.virt.block_device._do_locked_detach" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:47:03.180 1 INFO nova.virt.block_device [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Attempting to driver detach volume 3ff984c8-d23a-411a-9b5c-4db55726a5e7 from mountpoint /dev/vdb
>2018-07-09 14:47:03.186 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Attempting initial detach for device vdb detach_device_with_retry /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:426
>2018-07-09 14:47:03.186 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk">
>  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
>  <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7">
>    <host name="172.17.3.22" port="6789"/>
>  </source>
>  <target bus="virtio" dev="vdb"/>
>  <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial>
>  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
></disk>
> detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
>2018-07-09 14:47:08.200 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? 1. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
>2018-07-09 14:47:08.200 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Start retrying detach until device vdb is gone. detach_device_with_retry /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:442
>2018-07-09 14:47:08.201 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Waiting for function nova.virt.libvirt.guest._do_wait_and_retry_detach to return. func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:478
>2018-07-09 14:47:08.203 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk">
>  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
>  <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7">
>    <host name="172.17.3.22" port="6789"/>
>  </source>
>  <target bus="virtio" dev="vdb"/>
>  <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial>
>  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
></disk>
> detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
>2018-07-09 14:47:08.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:47:09.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:47:09.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:47:13.209 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
>2018-07-09 14:47:13.209 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
>2018-07-09 14:47:15.210 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 1. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
>2018-07-09 14:47:15.213 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk">
>  <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
>  <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7">
>    <host name="172.17.3.22" port="6789"/>
>  </source>
>  <target bus="virtio" dev="vdb"/>
>  <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial>
>  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
></disk>
> detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
>2018-07-09 14:47:17.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:47:17.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:47:17.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:47:17.318 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:47:17.318 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:47:17.319 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:47:17.687 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:47:17.738 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:47:17.753 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:47:17.754 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:47:18.755 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:47:18.776 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:47:18.870 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:47:18.870 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:47:18.921 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4967MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:47:18.922 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:47:19.014 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:47:19.033 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:47:19.145 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:47:19.197 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:47:19.197 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:47:19.198 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:47:19.280 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:47:19.296 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:47:19.372 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:47:19.373 1 DEBUG oslo_concurrency.lockutils 
[req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.451s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:47:19.373 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:47:20.219 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:47:20.219 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:47:21.911 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:47:21.928 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:47:21.929 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:47:23.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:47:24.220 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 2. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:47:24.222 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:47:29.228 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:47:29.228 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:47:35.229 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 3. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:47:35.232 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:47:40.239 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:47:40.240 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:47:48.240 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 4. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:47:48.242 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:47:53.248 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:47:53.249 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:48:03.250 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 5. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:48:03.252 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:48:03.317 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:08.257 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400 >2018-07-09 14:48:08.258 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456 >2018-07-09 14:48:09.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:10.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:11.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:18.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:18.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:18.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:48:18.293 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:48:18.314 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:48:18.315 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:48:18.315 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:48:18.407 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:48:18.466 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:48:18.481 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:48:18.482 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:48:20.259 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 6. 
_func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449 >2018-07-09 14:48:20.261 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk"> > <driver name="qemu" type="raw" cache="writeback" discard="unmap"/> > <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7"> > <host name="172.17.3.22" port="6789"/> > </source> > <target bus="virtio" dev="vdb"/> > <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial> > <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/> ></disk> > detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477 >2018-07-09 14:48:20.483 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:48:20.502 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:48:20.596 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:48:20.596 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 
d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:48:20.640 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4967MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": 
null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:48:20.641 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:48:20.724 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for 
resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:48:20.743 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:48:20.855 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:48:20.901 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. 
_remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:48:20.901 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:48:20.902 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:48:20.983 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:48:20.999 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:48:21.078 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:48:21.079 1 DEBUG oslo_concurrency.lockutils 
[req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.438s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:48:22.889 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:48:22.889 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:48:23.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:48:25.267 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
>2018-07-09 14:48:25.267 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
>2018-07-09 14:48:37.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:48:37.293 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.do_register_storage_use" :: waited 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:48:37.295 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "storage-registry-lock" released by "nova.virt.storage_users.do_register_storage_use" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:48:37.296 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.do_get_storage_users" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:48:37.296 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "storage-registry-lock" released by "nova.virt.storage_users.do_get_storage_users" :: held 0.001s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:48:37.338 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Verify base images _age_and_verify_cached_images /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:348
>2018-07-09 14:48:37.338 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Image id yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>2018-07-09 14:48:37.339 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Image id 52b458a6-e709-434c-8918-5183fe30e9d4 yields fingerprint 5b0904df4f49e357f8520a218e9db1e083b07d07 _age_and_verify_cached_images /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>2018-07-09 14:48:37.339 1 INFO nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] image 52b458a6-e709-434c-8918-5183fe30e9d4 at (/var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07): checking
>2018-07-09 14:48:37.340 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] image 52b458a6-e709-434c-8918-5183fe30e9d4 at (/var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07): image is in use _mark_in_use /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>2018-07-09 14:48:37.342 1 INFO oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/nova/nova-dist.conf', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpAqpv5K/privsep.sock']
>2018-07-09 14:48:38.020 1 INFO oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Spawned new privsep daemon via rootwrap
>2018-07-09 14:48:38.021 1 DEBUG oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Accepted privsep connection to /tmp/tmpAqpv5K/privsep.sock __init__ /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:331
>2018-07-09 14:48:37.964 2762 INFO oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep daemon starting
>2018-07-09 14:48:37.968 2762 INFO oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep process running with uid/gid: 0/0
>2018-07-09 14:48:37.972 2762 INFO oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
>2018-07-09 14:48:37.972 2762 INFO oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep daemon running as pid 2762
>2018-07-09 14:48:38.024 2762 DEBUG oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep: request[140039496814832]: (1,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
>2018-07-09 14:48:38.024 2762 DEBUG oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep: reply[140039496814832]: (2,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
>2018-07-09 14:48:38.025 2762 DEBUG oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep: request[140039496814832]: (3, 'nova.privsep.path.utime', ('/var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07',), {}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
>2018-07-09 14:48:38.092 2762 DEBUG oslo.privsep.daemon [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] privsep: reply[140039496814832]: (4, None) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
>2018-07-09 14:48:38.094 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de is a valid instance name _list_backing_images /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>2018-07-09 14:48:38.094 1 INFO nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Active base files: /var/lib/nova/instances/_base/5b0904df4f49e357f8520a218e9db1e083b07d07
>2018-07-09 14:48:38.095 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Verification complete _age_and_verify_cached_images /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:384
>2018-07-09 14:48:38.095 1 DEBUG nova.virt.libvirt.imagecache [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Verify swap images _age_and_verify_swap_images /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:333
>2018-07-09 14:48:39.268 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Invoking nova.virt.libvirt.guest._do_wait_and_retry_detach; retry count is 7. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:449
>2018-07-09 14:48:39.271 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] detach device xml: <disk type="network" device="disk">
> <driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
> <source protocol="rbd" name="volumes/volume-3ff984c8-d23a-411a-9b5c-4db55726a5e7">
> <host name="172.17.3.22" port="6789"/>
> </source>
> <target bus="virtio" dev="vdb"/>
> <serial>3ff984c8-d23a-411a-9b5c-4db55726a5e7</serial>
> <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
></disk>
> detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:477
>2018-07-09 14:48:44.277 1 DEBUG nova.virt.libvirt.guest [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Successfully detached device vdb from guest. Persistent? False. Live? True _try_detach_device /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:400
>2018-07-09 14:48:44.277 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception which is in the suggested list of exceptions occurred while invoking function: nova.virt.libvirt.guest._do_wait_and_retry_detach. _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:456
>2018-07-09 14:48:44.278 1 DEBUG oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Cannot retry nova.virt.libvirt.guest._do_wait_and_retry_detach upon suggested exception since retry count (7) reached max retry count (7). _func /usr/lib/python2.7/site-packages/oslo_service/loopingcall.py:466
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Dynamic interval looping call 'oslo_service.loopingcall._func' failed: DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall Traceback (most recent call last):
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 193, in _run_loop
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall result = func(*self.args, **self.kw)
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 471, in _func
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall return self._sleep_time
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall self.force_reraise()
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall six.reraise(self.type_, self.value, self.tb)
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 450, in _func
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall result = f(*args, **kwargs)
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 457, in _do_wait_and_retry_detach
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall device=alternative_device_name, reason=reason)
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.
>2018-07-09 14:48:44.278 1 ERROR oslo.service.loopingcall
>2018-07-09 14:48:44.279 1 WARNING nova.virt.block_device [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Guest refused to detach volume 3ff984c8-d23a-411a-9b5c-4db55726a5e7: DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.
>2018-07-09 14:48:44.279 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "b9156d9d-46d2-408a-8602-b15a0593a288" released by "nova.virt.block_device._do_locked_detach" :: held 101.100s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Exception during message handling: DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 79, in wrapped
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server function_name, call_dict, binary, tb)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server self.force_reraise()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 69, in wrapped
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1085, in decorated_function
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 213, in decorated_function
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server self.force_reraise()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 201, in decorated_function
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5497, in detach_volume
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server attachment_id=attachment_id)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5450, in _detach_volume
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server attachment_id=attachment_id, destroy_bdm=destroy_bdm)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 429, in detach
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server attachment_id, destroy_bdm)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return f(*args, **kwargs)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 426, in _do_locked_detach
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server self._do_detach(*args, **_kwargs)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 355, in _do_detach
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server self.driver_detach(context, instance, volume_api, virt_driver)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 324, in driver_detach
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server {'vol': volume_id}, instance=instance)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server self.force_reraise()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 314, in driver_detach
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server encryption=encryption)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1594, in detach_volume
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server wait_for_detach()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 479, in func
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return evt.wait()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return hubs.get_hub().switch()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return self.greenlet.switch()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 193, in _run_loop
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server result = func(*self.args, **self.kw)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 471, in _func
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server return self._sleep_time
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server self.force_reraise()
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 450, in _func
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 457, in _do_wait_and_retry_detach
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server device=alternative_device_name, reason=reason)
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest transient domain.
>2018-07-09 14:48:44.321 1 ERROR oslo_messaging.rpc.server
>2018-07-09 14:49:04.095 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:04.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:11.305 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:12.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:13.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:18.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:18.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:49:18.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:49:18.313 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:49:18.313 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:49:18.313 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:49:18.408 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:49:18.460 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:49:18.477 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:49:18.477 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:49:20.478 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:22.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:22.313 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:49:22.414 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:49:22.414 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:49:22.462 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:49:22.462 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:49:22.563 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:49:22.582 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:49:22.698 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:49:22.751 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:49:22.751 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:49:22.752 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:49:22.846 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:49:22.867 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:49:22.947 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:49:22.947 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.485s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:49:22.948 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:22.948 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908
>2018-07-09 14:49:23.962 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:23.980 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:23.980 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:49:24.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_bandwidth_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:25.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:42.314 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:49:42.315 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865
>2018-07-09 14:49:42.333 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] There are 0 instances to clean _run_pending_deletes
/usr/lib/python2.7/site-packages/nova/compute/manager.py:7874 >2018-07-09 14:50:04.311 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:12.294 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:13.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:15.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:18.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:18.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:50:18.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:50:18.313 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:50:18.314 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:50:18.314 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:50:18.779 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:50:18.843 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:50:18.858 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:50:18.859 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:50:20.860 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:24.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:24.308 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 
- default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:24.330 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:50:24.436 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:50:24.436 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:50:24.483 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": 
null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": 
"pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:50:24.484 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:50:24.575 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:50:24.593 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:50:24.709 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. 
_update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:50:24.763 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:50:24.764 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:50:24.764 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:50:24.868 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:50:24.889 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations 
/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:50:24.969 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:50:24.970 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.486s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:50:25.954 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:50:25.955 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:51:04.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:14.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:14.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:16.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:20.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:20.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:51:20.293 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:51:20.315 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:51:20.316 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:51:20.316 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:51:20.689 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:51:20.744 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:51:20.760 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:51:20.760 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:51:21.761 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:24.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:24.312 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 
d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:51:24.408 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:51:24.408 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:51:24.454 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", 
"dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", 
"address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:51:24.455 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:51:24.886 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:51:24.905 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:51:25.014 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. 
_update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:51:25.061 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:51:25.062 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:51:25.063 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:51:25.145 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:51:25.165 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations 
/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:51:25.241 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:51:25.242 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.787s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:51:26.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:26.314 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:27.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:51:27.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] 
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:52:04.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:15.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:16.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:16.441 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:16.463 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Triggering sync for uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de _sync_power_states /usr/lib/python2.7/site-packages/nova/compute/manager.py:7229 >2018-07-09 14:52:16.464 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock 
"c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" acquired by "nova.compute.manager.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:52:16.512 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" released by "nova.compute.manager.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:52:17.315 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:20.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:20.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:52:20.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:52:20.313 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 
d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:52:20.313 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:52:20.314 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:52:20.702 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:52:20.762 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:52:20.777 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:52:20.777 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:52:22.778 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:24.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:24.314 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 
d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:52:24.414 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:52:24.415 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:52:24.464 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", 
"dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", 
"address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:52:24.464 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:52:24.555 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:52:24.575 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:52:24.690 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. 
_update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:52:24.745 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:52:24.745 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:52:24.746 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:52:24.837 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:52:24.856 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations 
/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:52:24.940 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:52:24.941 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.476s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:52:27.941 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:28.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:52:28.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:53:05.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:16.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:17.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:18.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:22.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:22.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:22.293 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:53:22.293 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:53:22.313 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212 >2018-07-09 14:53:22.313 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414 >2018-07-09 14:53:22.314 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094 >2018-07-09 14:53:22.675 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] 
Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:53:22.736 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:53:22.751 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:53:22.751 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:53:25.752 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:53:25.773 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:53:25.879 1 DEBUG nova.virt.libvirt.driver 
[req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:53:25.880 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:53:25.926 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": 
"pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:53:25.926 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default 
default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:53:26.018 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:53:26.038 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:53:26.157 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:53:26.210 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:53:26.211 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:53:26.211 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:53:26.304 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:53:26.326 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:53:26.409 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:53:26.409 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.483s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:53:27.949 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:53:28.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:53:29.316 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:53:29.317 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:54:05.291 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:06.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:17.304 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:18.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:19.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:23.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:23.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:54:23.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:54:23.312 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:54:23.312 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:54:23.312 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:54:23.411 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:54:23.468 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:54:23.483 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:54:23.483 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:54:24.484 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:27.293 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:27.316 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:54:27.412 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:54:27.413 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:54:27.458 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:54:27.458 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:54:27.551 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:54:27.572 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:54:27.752 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:54:27.803 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:54:27.804 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:54:27.804 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:54:27.892 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:54:27.910 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:54:27.991 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:54:27.992 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.534s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:54:28.991 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:29.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:29.293 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python2.7/site-packages/nova/compute/manager.py:7908
>2018-07-09 14:54:30.307 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:30.308 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:54:50.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:54:50.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Cleaning up deleted instances _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7865
>2018-07-09 14:54:50.310 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] There are 0 instances to clean _run_pending_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7874
>2018-07-09 14:55:06.311 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:18.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:18.293 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:19.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:25.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:25.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:55:25.293 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:55:25.312 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:55:25.313 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:55:25.313 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:55:25.409 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372
>2018-07-09 14:55:25.465 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
>2018-07-09 14:55:25.481 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>2018-07-09 14:55:25.482 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817
>2018-07-09 14:55:26.482 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:27.293 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:27.313 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669
>2018-07-09 14:55:27.411 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:55:27.412 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947
>2018-07-09 14:55:27.459 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808
>2018-07-09 14:55:27.459 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>2018-07-09 14:55:27.547 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:55:27.565 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:55:27.677 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. _update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235
>2018-07-09 14:55:27.730 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257
>2018-07-09 14:55:27.731 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824
>2018-07-09 14:55:27.731 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[]
>2018-07-09 14:55:27.819 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781
>2018-07-09 14:55:27.836 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792
>2018-07-09 14:55:27.914 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764
>2018-07-09 14:55:27.914 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.455s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>2018-07-09 14:55:28.913 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:32.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:55:32.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421
>2018-07-09 14:55:33.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:56:07.317 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:56:18.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:56:18.293 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:56:21.288 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:56:25.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>2018-07-09 14:56:25.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755
>2018-07-09 14:56:25.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759
>2018-07-09 14:56:25.313 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Acquired semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
>2018-07-09 14:56:25.313 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1414
>2018-07-09 14:56:25.314 1 DEBUG nova.objects.instance [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lazy-loading 'info_cache' on Instance uuid c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1094
>2018-07-09 14:56:25.704 1 DEBUG nova.network.neutronv2.api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de]
Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:56:25.758 1 DEBUG nova.network.base_api [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:56:25.773 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Releasing semaphore "refresh_cache-c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228 >2018-07-09 14:56:25.773 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6817 >2018-07-09 14:56:26.774 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:56:28.293 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:56:28.315 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 
fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:56:28.411 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:56:28.412 1 DEBUG nova.virt.libvirt.driver [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7947 >2018-07-09 14:56:28.460 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4936MB free_disk=39GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": 
"pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, 
"vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:56:28.461 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:56:28.811 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:56:28.830 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:56:28.953 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute driver doesn't require allocation refresh and we're on a compute host in a deployment that only has compute hosts with Nova versions >=16 (Pike). Skipping auto-correction of allocations. 
_update_usage_from_instances /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1235 >2018-07-09 14:56:29.009 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de actively managed on this compute host and has allocations in placement: {u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}. _remove_deleted_instances_allocations /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1257 >2018-07-09 14:56:29.010 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:56:29.010 1 INFO nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4160MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=1 pci_stats=[] >2018-07-09 14:56:29.098 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:56:29.117 1 DEBUG nova.scheduler.client.report [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations 
/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:56:29.194 1 DEBUG nova.compute.resource_tracker [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:56:29.194 1 DEBUG oslo_concurrency.lockutils [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.734s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:56:30.194 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:56:34.292 1 DEBUG oslo_service.periodic_task [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:56:34.292 1 DEBUG nova.compute.manager [req-5be5ac81-32d6-46bc-aa79-54aa92aa0634 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:57:07.535 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" acquired by "nova.compute.manager.do_terminate_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:57:07.535 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de-events" acquired by "nova.compute.manager._clear_events" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:57:07.536 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de-events" released by "nova.compute.manager._clear_events" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:57:07.539 1 INFO nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Terminating instance >2018-07-09 14:57:07.542 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python2.7/site-packages/nova/compute/manager.py:2353 >2018-07-09 14:57:07.751 1 INFO nova.virt.libvirt.driver [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance destroyed successfully. >2018-07-09 14:57:07.841 1 INFO nova.virt.libvirt.driver [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Deleting instance files /var/lib/nova/instances/c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_del >2018-07-09 14:57:07.842 1 INFO nova.virt.libvirt.driver [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Deletion of /var/lib/nova/instances/c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de_del complete >2018-07-09 14:57:07.938 1 INFO nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Took 0.40 seconds to destroy the instance on the hypervisor. 
>2018-07-09 14:57:07.938 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Deallocating network for instance _deallocate_network /usr/lib/python2.7/site-packages/nova/compute/manager.py:1644 >2018-07-09 14:57:07.939 1 DEBUG nova.network.neutronv2.api [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] deallocate_for_instance() deallocate_for_instance /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:1280 >2018-07-09 14:57:08.373 1 DEBUG nova.network.neutronv2.api [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py:2372 >2018-07-09 14:57:08.389 1 DEBUG nova.network.base_api [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48 >2018-07-09 14:57:08.406 1 INFO nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Took 0.47 seconds to deallocate network for instance. 
>2018-07-09 14:57:08.409 1 DEBUG cinderclient.v3.client [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] REQ: curl -g -i -X DELETE http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/attachments/379fbf8a-5b61-4df6-b2a7-b4ae35b552bb -H "Accept: application/json" -H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-Auth-Token: {SHA1}27fb659bbceb8f65101abce9a890ac345ce78dc2" -H "X-OpenStack-Request-ID: req-17e7107e-8783-4075-97c8-b82aee982a22" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:448 >2018-07-09 14:57:08.670 1 DEBUG cinderclient.v3.client [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP: [200] Content-Length: 19 Content-Type: application/json Date: Mon, 09 Jul 2018 14:57:08 GMT OpenStack-API-Version: volume 3.44 Server: Apache Vary: OpenStack-API-Version x-compute-request-id: req-5cddfba1-a592-4fec-8d8f-b7489e6bd476 x-openstack-request-id: req-5cddfba1-a592-4fec-8d8f-b7489e6bd476 _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:479 >2018-07-09 14:57:08.671 1 DEBUG cinderclient.v3.client [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] RESP BODY: {"attachments": []} _http_log_response /usr/lib/python2.7/site-packages/keystoneauth1/session.py:511 >2018-07-09 14:57:08.672 1 DEBUG cinderclient.v3.client [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] DELETE call to cinderv3 for http://172.17.1.10:8776/v3/d07b58ddf0d84309bffabd6abdddfc36/attachments/379fbf8a-5b61-4df6-b2a7-b4ae35b552bb used request id req-5cddfba1-a592-4fec-8d8f-b7489e6bd476 request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:844 >2018-07-09 14:57:08.672 1 INFO 
nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Took 0.27 seconds to detach 1 volumes for instance. >2018-07-09 14:57:08.861 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.update_usage" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:57:08.989 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:57:09.011 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:57:09.110 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker.update_usage" :: held 0.249s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:57:09.145 1 INFO nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Deleted allocation for instance c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de 
>2018-07-09 14:57:09.164 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de" released by "nova.compute.manager.do_terminate_instance" :: held 1.629s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:57:09.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:19.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:20.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:21.288 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:22.749 1 DEBUG nova.virt.driver [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Emitting event <LifecycleEvent: 1531148227.75, c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de => Stopped> emit_event 
/usr/lib/python2.7/site-packages/nova/virt/driver.py:1521 >2018-07-09 14:57:22.750 1 INFO nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] VM Stopped (Lifecycle Event) >2018-07-09 14:57:22.790 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] [instance: c134f16e-9f8a-4fd2-9e3c-6c5ae6a432de] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1166 >2018-07-09 14:57:25.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:25.292 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:57:25.292 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:57:25.308 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:57:27.309 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:29.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:29.314 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:57:29.412 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4977MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", 
"product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": "pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": 
"label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 >2018-07-09 14:57:29.413 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:57:29.504 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:57:29.525 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:57:29.682 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:57:29.683 1 INFO nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] 
Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:57:29.766 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:57:29.787 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:57:29.868 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:57:29.869 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.455s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:57:30.869 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 
>2018-07-09 14:57:33.288 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:36.310 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:57:36.310 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421 >2018-07-09 14:58:09.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:19.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:20.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 
>2018-07-09 14:58:21.288 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:25.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:25.292 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6755 >2018-07-09 14:58:25.292 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6759 >2018-07-09 14:58:25.309 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:6831 >2018-07-09 14:58:29.309 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:29.331 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Auditing locally available compute resources for compute-0.localdomain (node: compute-0.localdomain) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:669 >2018-07-09 14:58:29.428 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Hypervisor/Node resource view: name=compute-0.localdomain free_ram=4979MB free_disk=39GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "product_id": "1003", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1003", "address": "0000:00:07.0"}, {"dev_id": "pci_0000_00_00_0", "product_id": "1237", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_1237", "address": "0000:00:00.0"}, {"dev_id": "pci_0000_00_06_0", "product_id": "2934", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2934", "address": "0000:00:06.0"}, {"dev_id": "pci_0000_00_02_0", "product_id": "00b8", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1013", "label": "label_1013_00b8", "address": "0000:00:02.0"}, {"dev_id": "pci_0000_00_01_0", "product_id": "7000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7000", "address": "0000:00:01.0"}, {"dev_id": 
"pci_0000_00_03_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:03.0"}, {"dev_id": "pci_0000_00_09_0", "product_id": "1002", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1002", "address": "0000:00:09.0"}, {"dev_id": "pci_0000_00_06_1", "product_id": "2935", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2935", "address": "0000:00:06.1"}, {"dev_id": "pci_0000_00_04_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:04.0"}, {"dev_id": "pci_0000_00_08_0", "product_id": "1001", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1001", "address": "0000:00:08.0"}, {"dev_id": "pci_0000_00_01_3", "product_id": "7113", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7113", "address": "0000:00:01.3"}, {"dev_id": "pci_0000_00_06_2", "product_id": "2936", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_2936", "address": "0000:00:06.2"}, {"dev_id": "pci_0000_00_01_1", "product_id": "7010", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_7010", "address": "0000:00:01.1"}, {"dev_id": "pci_0000_00_06_7", "product_id": "293a", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "8086", "label": "label_8086_293a", "address": "0000:00:06.7"}, {"dev_id": "pci_0000_00_05_0", "product_id": "1000", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1000", "address": "0000:00:05.0"}, {"dev_id": "pci_0000_00_0a_0", "product_id": "1005", "dev_type": "type-PCI", "numa_node": null, "vendor_id": "1af4", "label": "label_1af4_1005", "address": "0000:00:0a.0"}] _report_hypervisor_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:808 
>2018-07-09 14:58:29.428 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 >2018-07-09 14:58:29.523 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:58:29.545 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:58:29.700 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:824 >2018-07-09 14:58:29.701 1 INFO nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Final resource view: name=compute-0.localdomain phys_ram=6143MB used_ram=4096MB phys_disk=39GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] >2018-07-09 14:58:29.796 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - 
default default] Refreshing aggregate associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, aggregates: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:781 >2018-07-09 14:58:29.818 1 DEBUG nova.scheduler.client.report [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Refreshing trait associations for resource provider 15fc0fc9-a8e0-45b4-bdf7-b7f02dff83ba, traits: None _refresh_associations /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:792 >2018-07-09 14:58:29.898 1 DEBUG nova.compute.resource_tracker [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Compute_service record updated for compute-0.localdomain:compute-0.localdomain _update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:764 >2018-07-09 14:58:29.899 1 DEBUG oslo_concurrency.lockutils [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker._update_available_resource" :: held 0.470s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >2018-07-09 14:58:29.899 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:30.882 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:38.292 1 DEBUG oslo_service.periodic_task [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215 >2018-07-09 14:58:38.292 1 DEBUG nova.compute.manager [req-17e7107e-8783-4075-97c8-b82aee982a22 fd79cfc452c04158bd3d99c94a110dc5 d07b58ddf0d84309bffabd6abdddfc36 - default default] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:7421