Bug 1312908 - Live migration (volume-backed) fails for an instance launched from a bootable volume with new blank block devices
Summary: Live migration (volume-backed) fails for an instance launched from a bootable volume...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 7.0 (Kilo)
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assignee: Matthew Booth
QA Contact: Joe H. Rahme
URL:
Whiteboard:
Depends On:
Blocks: 1316892 1316893 1316894 1316895
 
Reported: 2016-02-29 13:54 UTC by Yevgeniy Ovsyannikov
Modified: 2019-09-09 13:10 UTC
10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1316892 1316893 1316894 1316895
Environment:
Last Closed: 2017-09-27 11:07:06 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
nova compute log (4.90 MB, text/plain)
2016-02-29 13:54 UTC, Yevgeniy Ovsyannikov
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1469006 0 None None None 2016-03-07 12:17:47 UTC
OpenStack gerrit 195885 0 None None None 2016-03-07 12:18:19 UTC
OpenStack gerrit 289302 0 None None None 2016-03-07 12:33:18 UTC

Description Yevgeniy Ovsyannikov 2016-02-29 13:54:09 UTC
Created attachment 1131531 [details]
nova compute log

Description of problem:

I'm trying to live-migrate an instance which was created with blank block devices, like this:

nova boot --flavor m1.large --block-device source=image,id=5524dc31-fabe-47b5-95e7-53d915034272,dest=volume,size=24,shutdown=remove,bootindex=0 TEST --nic net-id=a31c345c-a7d8-4ae8-870d-6da30fc6c083 --block-device source=blank,dest=volume,size=10,shutdown=remove --block-device source=blank,dest=volume,size=1,format=swap,shutdown=remove

+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| 10b3414d-8a91-435d-9fe9-44b90837f519 | TEST | ACTIVE | - | Running | public=172.24.4.228 |
+--------------------------------------+------+--------+------------+-------------+---------------------+
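
All three volumes are created automatically by Nova from the --block-device entries in the boot command. As a quick sanity check (these verification commands are my own suggestion, not part of the original report), the resulting attachments can be listed with the standard clients:

cinder list
nova show TEST | grep volumes_attached

nova show reports the attachments under os-extended-volumes:volumes_attached when that API extension is enabled.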

The instance is on compute1.
When I try to migrate it to compute2 I get this error:

# nova live-migration TEST compute2
ERROR (BadRequest): compute1 is not on shared storage: Live migration can not be used without shared storage. (HTTP 400) (Request-ID: req-5963c4c1-d486-4778-97dc-a6af73b0db0d)

I have found a workflow that works, but the volumes have to be created manually:

1. Create 2 basic volumes manually (see the example commands after the boot command below)
2. Boot the instance with this command:

nova boot --flavor m1.large --block-device source=image,id=5524dc31-fabe-47b5-95e7-53d915034272,dest=volume,size=8,shutdown=remove,bootindex=0 TEST --nic net-id=087a89bf-b864-4208-9ac3-c638bb1ad1cc --block-device source=volume,dest=volume,id=94e2c86a-b56d-4b06-b220-ebca182d01d3,shutdown=remove --block-device source=volume,dest=volume,id=362e13f6-4cc3-4bb5-a99c-0f65e37abe5c,shutdown=remove
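
For step 1, the volumes can be created with the Cinder CLI; the names and sizes below are illustrative and not taken from the original report (depending on the cinderclient version, --name may be needed instead of --display-name):

cinder create 10 --display-name test-data
cinder create 1 --display-name test-swap
cinder list

The IDs shown by cinder list are the values passed as id= in the --block-device arguments above.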

If the instance is created this way, live migration works properly.

There is no such issue on Liberty.

Version-Release number of selected component (if applicable):

Kilo RHOSP Version: 2015.1.2

Comment 2 Matthew Booth 2016-03-07 12:13:47 UTC
Looks like this is fixed in upstream Liberty by https://review.openstack.org/#/c/195885/

Comment 3 Yevgeniy Ovsyannikov 2016-03-07 12:15:56 UTC
Could this be backported to RHOSP 7?

Comment 4 Matthew Booth 2016-03-07 12:19:20 UTC
(In reply to Yevgeniy Ovsyannikov from comment #3)
> Could this be backported to RHOSP 7?

Looking into it. I'll see if I can get a backport into upstream stable/kilo first.

Comment 5 Matthew Booth 2016-03-07 12:34:12 UTC
Looks like a clean backport: https://review.openstack.org/#/c/289302/

I'll wait to see if it's likely to go upstream first.

Comment 6 Mike McCune 2016-03-28 23:03:07 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.

