+++ This bug was initially created as a clone of Bug #1419480 +++

Description of problem:

When booting an instance from a volume created from an image, like:

# nova boot --flavor m1.tiny --block-device source=image,id=5875cad9-4662-41da-b390-4638df154e01,dest=volume,size=1,shutdown=preserve,bootindex=0 --nic net-id=a5b46d6b-e89f-4399-ac62-1f33dee55662 cirros-nfs-from-volume

the source_type of the volume's block device mapping is "image":

# mysql -u root nova -e "select source_type,volume_id from block_device_mapping where volume_id='89aa74f6-459a-41da-a1c5-584f975c96a5'\G;"
*************************** 1. row ***************************
source_type: image
  volume_id: 89aa74f6-459a-41da-a1c5-584f975c96a5

As a result, if we migrate the volume to a different storage backend, the migration fails with:

2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [req-62a58fdd-031f-4117-a959-7f9008eae1d0 7de1d918a88540ee9cf9a15d845dad8f 6741168c17524c24b79ce5af6373a7c4 - - -] [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2] Failed to swap volume 89aa74f6-459a-41da-a1c5-584f975c96a5 for 46123a0e-e158-4c32-8e24-125105b0f6bc
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2] Traceback (most recent call last):
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4861, in _swap_volume
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]     resize_to)
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1210, in swap_volume
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]     driver_bdm = driver_block_device.DriverVolumeBlockDevice(bdm)
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 105, in __init__
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]     self._transform()
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 207, in _transform
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2]     raise _InvalidType
2017-02-06 04:53:35.432 899 ERROR nova.compute.manager [instance: 18bdbbd8-684d-49a4-92ea-aa2ee335ccf2] _InvalidType

This was fixed upstream with:
https://review.openstack.org/#/c/315864/
With this change the migration is successful.

Version-Release number of selected component (if applicable):
OSP8, python-nova-12.0.4-8.el7ost.noarch

How reproducible:
always

Steps to Reproduce:
1. Boot an instance from volume as above.
2. Migrate the volume to a different storage backend:
   # cinder migrate 89aa74f6-459a-41da-a1c5-584f975c96a5 osp8-controller@nfs2\#nfs2

Actual results:
The migration fails with the _InvalidType error above.

Expected results:
The migration succeeds.

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-02-06 10:07:52 GMT ---

This bugzilla has been removed from the release and needs to be reviewed and triaged for another target release.

--- Additional comment from Matthew Booth on 2017-02-10 10:47:43 GMT ---

https://code.engineering.redhat.com/gerrit/97408
https://code.engineering.redhat.com/gerrit/#/c/97410/
Verified it using the following steps:

1. Set up 2 different cinder backends:

[stack@undercloud-0 ~]$ cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                    | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | hostgroup               | nova | enabled | up    | 2017-06-13T17:01:01.000000 | -               |
| cinder-volume    | hostgroup@tripleo_iscsi | nova | enabled | up    | 2017-06-13T17:00:59.000000 | -               |
| cinder-volume    | hostgroup@tripleo_nfs   | nova | enabled | up    | 2017-06-13T17:00:59.000000 | -               |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+

2. Boot VM:

nova boot --flavor m1.tiny --block-device source=image,id=c04dedcd-73be-4160-886e-7d3f5eb8a9ed,dest=volume,size=1,shutdown=preserve,bootindex=0 vm4
+--------------------------------------+-------------------------------------------------+
| Property                             | Value                                           |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                          |
| OS-EXT-AZ:availability_zone          |                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000004                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-STS:task_state                | scheduling                                      |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | -                                               |
| OS-SRV-USG:terminated_at             | -                                               |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| adminPass                            | HxE54RhGTtv9                                    |
| config_drive                         |                                                 |
| created                              | 2017-06-13T16:56:40Z                            |
| flavor                               | m1.tiny (1)                                     |
| hostId                               |                                                 |
| id                                   | eb6981fc-24c4-4d35-b97d-379656458276            |
| image                                | Attempt to boot from volume - no image supplied |
| key_name                             | -                                               |
| metadata                             | {}                                              |
| name                                 | vm4                                             |
| os-extended-volumes:volumes_attached | []                                              |
| progress                             | 0                                               |
| security_groups                      | default                                         |
| status                               | BUILD                                           |
| tenant_id                            | 6c9a06dafef644efa168f3c869764c43                |
| updated                              | 2017-06-13T16:56:40Z                            |
| user_id                              | f3655d8699f34ac5bbef11922a34861f                |
+--------------------------------------+-------------------------------------------------+

3. Migrate the created volume to a new host:

[stack@undercloud-0 ~]$ cinder migrate 7c4f69fd-8c4f-4cd6-adb7-4601ecffe7bb hostgroup@tripleo_iscsi
[stack@undercloud-0 ~]$
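When automating verification steps like the ones above, the bordered ASCII tables that the cinder and nova CLIs print can be turned into row dictionaries with a small helper. This is a convenience sketch for scripting, not part of any OpenStack client, and it assumes cell values contain no literal '|' characters.

```python
def parse_cli_table(text):
    """Parse a '+---+' bordered CLI table into a list of row dicts."""
    # Keep only content rows (they start with '|'); border lines start
    # with '+' and are dropped.
    rows = [line for line in text.strip().splitlines()
            if line.lstrip().startswith('|')]
    cells = [[c.strip() for c in row.strip().strip('|').split('|')]
             for row in rows]
    header, body = cells[0], cells[1:]
    return [dict(zip(header, row)) for row in body]

# Trimmed-down sample in the same shape as `cinder service-list` output.
sample = """
+------------------+-------------------------+---------+-------+
| Binary           | Host                    | Status  | State |
+------------------+-------------------------+---------+-------+
| cinder-volume    | hostgroup@tripleo_iscsi | enabled | up    |
| cinder-volume    | hostgroup@tripleo_nfs   | enabled | up    |
+------------------+-------------------------+---------+-------+
"""

services = parse_cli_table(sample)
print(services[0]['Host'])   # hostgroup@tripleo_iscsi
```

A check script can then assert, for example, that both cinder-volume services are enabled and up before and after the migration.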
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1539