Red Hat Bugzilla – Attachment 928854 Details for Bug 1132053: Device busy when detaching encrypted volume.
Description: detach of in-use volume
Filename: file_1132053.txt
MIME Type: text/plain
Creator: Jaroslav Henner
Created: 2014-08-20 14:43:37 UTC
Size: 19.82 KB
2014-08-20 10:42:54.453 16556 AUDIT nova.compute.manager [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] Detach volume f95a2ddf-86f2-4908-b5ab-dc97dff5b557 from mountpoint /dev/vdb
2014-08-20 10:42:54.459 16556 DEBUG nova.volume.cinder [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] Cinderclient connection created using URL: http://172.16.40.15:8776/v1/6e65bc9635274edab3013039804c683d cinderclient /usr/lib/python2.6/site-packages/nova/volume/cinder.py:93
2014-08-20 10:42:54.461 16556 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 172.16.40.15
2014-08-20 10:42:54.597 16556 DEBUG urllib3.connectionpool [-] "GET /v1/6e65bc9635274edab3013039804c683d/volumes/f95a2ddf-86f2-4908-b5ab-dc97dff5b557/encryption HTTP/1.1" 200 209 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-08-20 10:42:54.607 16556 ERROR nova.compute.manager [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] Failed to detach volume f95a2ddf-86f2-4908-b5ab-dc97dff5b557 from /dev/vdb
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] Traceback (most recent call last):
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4203, in _detach_volume
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] encryption=encryption)
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1370, in detach_volume
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] raise exception.DiskNotFound(location=disk_dev)
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac] DiskNotFound: No disk at vdb
2014-08-20 10:42:54.607 16556 TRACE nova.compute.manager [instance: aa285a66-f2d1-481e-8b5c-ad907d5e70ac]
2014-08-20 10:42:54.609 16556 DEBUG nova.volume.cinder [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] Cinderclient connection created using URL: http://172.16.40.15:8776/v1/6e65bc9635274edab3013039804c683d cinderclient /usr/lib/python2.6/site-packages/nova/volume/cinder.py:93
2014-08-20 10:42:54.611 16556 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 172.16.40.15
2014-08-20 10:42:54.729 16556 DEBUG urllib3.connectionpool [-] "POST /v1/6e65bc9635274edab3013039804c683d/volumes/f95a2ddf-86f2-4908-b5ab-dc97dff5b557/action HTTP/1.1" 202 0 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-08-20 10:42:54.897 16556 DEBUG nova.openstack.common.lockutils [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] Got semaphore "compute_resources" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:168
2014-08-20 10:42:54.898 16556 DEBUG nova.openstack.common.lockutils [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] Got semaphore / lock "update_usage" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:248
2014-08-20 10:42:54.939 16556 DEBUG nova.openstack.common.lockutils [req-bb42f7d5-317e-4537-a55e-b2bbe29850a3 admin admin] Semaphore / lock released "update_usage" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:252
2014-08-20 10:42:54.951 16556 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: No disk at vdb
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher payload)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 274, in decorated_function
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher pass
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 260, in decorated_function
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 303, in decorated_function
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher e, sys.exc_info())
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 290, in decorated_function
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4240, in detach_volume
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher self._detach_volume(context, instance, bdm)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4210, in _detach_volume
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher self.volume_api.roll_detaching(context, volume_id)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4203, in _detach_volume
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher encryption=encryption)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1370, in detach_volume
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher raise exception.DiskNotFound(location=disk_dev)
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher DiskNotFound: No disk at vdb
2014-08-20 10:42:54.951 16556 TRACE oslo.messaging.rpc.dispatcher
2014-08-20 10:42:54.954 16556 ERROR oslo.messaging._drivers.common [-] Returning exception No disk at vdb to caller
2014-08-20 10:42:54.954 16556 ERROR oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', ' File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped\n payload)\n', ' File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped\n return f(self, context, *args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 274, in decorated_function\n pass\n', ' File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 260, in decorated_function\n return function(self, context, *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 303, in decorated_function\n e, sys.exc_info())\n', ' File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 290, in decorated_function\n return function(self, context, *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4240, in detach_volume\n self._detach_volume(context, instance, bdm)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4210, in _detach_volume\n self.volume_api.roll_detaching(context, volume_id)\n', ' File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4203, in _detach_volume\n encryption=encryption)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1370, in detach_volume\n raise exception.DiskNotFound(location=disk_dev)\n', 'DiskNotFound: No disk at vdb\n']
2014-08-20 10:42:55.555 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:55.556 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:55.556 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:55.557 16556 DEBUG nova.openstack.common.lockutils [-] Got semaphore "compute_resources" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:168
2014-08-20 10:42:55.557 16556 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "update_available_resource" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:248
2014-08-20 10:42:55.558 16556 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-08-20 10:42:55.558 16556 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:5320
2014-08-20 10:42:55.672 16556 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/aa285a66-f2d1-481e-8b5c-ad907d5e70ac/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
2014-08-20 10:42:55.698 16556 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:187
2014-08-20 10:42:55.699 16556 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/aa285a66-f2d1-481e-8b5c-ad907d5e70ac/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
2014-08-20 10:42:55.718 16556 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:187
2014-08-20 10:42:55.730 16556 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83894851-f636-4810-9d1b-d3b9adf0863d/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
2014-08-20 10:42:55.748 16556 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:187
2014-08-20 10:42:55.749 16556 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83894851-f636-4810-9d1b-d3b9adf0863d/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
2014-08-20 10:42:55.766 16556 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:187
2014-08-20 10:42:55.777 16556 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46b4d6f2-3ed5-4d4e-b9af-e41a226b522d/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
2014-08-20 10:42:55.795 16556 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:187
2014-08-20 10:42:55.797 16556 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46b4d6f2-3ed5-4d4e-b9af-e41a226b522d/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
2014-08-20 10:42:55.815 16556 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:187
2014-08-20 10:42:55.878 16556 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 3362 _report_hypervisor_resource_view /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:409
2014-08-20 10:42:55.879 16556 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 74 _report_hypervisor_resource_view /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:410
2014-08-20 10:42:55.880 16556 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 1 _report_hypervisor_resource_view /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:415
2014-08-20 10:42:55.880 16556 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [] _report_hypervisor_resource_view /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:422
2014-08-20 10:42:56.006 16556 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 6270
2014-08-20 10:42:56.007 16556 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 76
2014-08-20 10:42:56.007 16556 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
2014-08-20 10:42:56.050 16556 INFO nova.compute.resource_tracker [-] Compute_service record updated for is-busy.novalocal:is-busy.novalocal
2014-08-20 10:42:56.051 16556 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "update_available_resource" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:252
2014-08-20 10:42:56.097 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:56.097 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:56.098 16556 DEBUG nova.compute.manager [-] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.6/site-packages/nova/compute/manager.py:5430
2014-08-20 10:42:56.098 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:56.099 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:56.100 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:56.100 16556 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python2.6/site-packages/nova/openstack/common/periodic_task.py:178
2014-08-20 10:42:56.101 16556 DEBUG nova.compute.manager [-] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.6/site-packages/nova/compute/manager.py:4851
2014-08-20 10:42:56.531 16556 DEBUG nova.network.api [-] Updating cache with info: [VIF({'ovs_interfaceid': None, 'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': u'192.168.32.4'})], 'version': 4, 'meta': {u'dhcp_server': u'192.168.32.1'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 'routes': [], 'cidr': u'192.168.32.0/22', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': u'192.168.32.1'})}), Subnet({'ips': [], 'version': None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, u'bridge_interface': u'lo'}, 'id': u'94db1988-71f8-4e26-b3ad-64bd824f9fc3', 'label': u'novanetwork'}), 'devname': None, 'qbh_params': None, 'meta': {}, 'details': {}, 'address': u'fa:16:3e:c0:c8:46', 'active': False, 'type': u'bridge', 'id': u'7adb3d78-e59a-46d5-9dcf-f104221a4295', 'qbg_params': None})] update_instance_cache_with_nw_info /usr/lib/python2.6/site-packages/nova/network/api.py:75
2014-08-20 10:42:56.562 16556 DEBUG nova.compute.manager [-] [instance: 83894851-f636-4810-9d1b-d3b9adf0863d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.6/site-packages/nova/compute/manager.py:4912
2014-08-20 10:42:56.563 16556 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /usr/lib/python2.6/site-packages/nova/openstack/common/loopingcall.py:132
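The trace ends in DiskNotFound: No disk at vdb, raised when nova's libvirt driver cannot find the target device among the disks in the guest's domain XML before detaching. The sketch below is not nova's actual code; the DiskNotFound class, the find_disk helper, and the sample domain XML are illustrative, and only the lookup-then-raise pattern mirrors what the traceback shows.

```python
# Minimal sketch, assuming the failure mode in the trace above: the guest's
# domain XML lists only vda, so looking up vdb raises DiskNotFound.
import xml.etree.ElementTree as ET


class DiskNotFound(Exception):
    """Illustrative stand-in for nova's exception.DiskNotFound."""

    def __init__(self, location):
        super().__init__("No disk at %s" % location)
        self.location = location


# Hypothetical libvirt domain XML fragment: the encrypted volume's device
# (vdb) is already gone from the guest, only the root disk (vda) remains.
DOMAIN_XML = """<domain>
  <devices>
    <disk type='file' device='disk'>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>"""


def find_disk(domain_xml, disk_dev):
    """Return the <disk> element whose <target dev=...> matches disk_dev,
    or raise DiskNotFound -- the same lookup-then-raise shape as the trace."""
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        if target is not None and target.get("dev") == disk_dev:
            return disk
    raise DiskNotFound(location=disk_dev)


if __name__ == "__main__":
    find_disk(DOMAIN_XML, "vda")  # present, returns the element
    try:
        find_disk(DOMAIN_XML, "vdb")
    except DiskNotFound as exc:
        print(exc)  # No disk at vdb
```

After the lookup fails, the log shows nova calling volume_api.roll_detaching, i.e. rolling the Cinder volume back out of the "detaching" state rather than leaving it stuck.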