Bug 845675 - Storage Live Migration with Live Guest Migration in qemu-kvm
Summary: Storage Live Migration with Live Guest Migration in qemu-kvm
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Paolo Bonzini
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 845674 845676 845679 869944 927252 1056726 1080820 1113727 1122703
 
Reported: 2012-08-03 20:19 UTC by Karen Noel
Modified: 2014-07-23 19:57 UTC (History)
CC List: 8 users

Fixed In Version: 1.3
Doc Type: Release Note
Doc Text:
[Note: this feature is only available in RHEV Hypervisor and is therefore not part of the RHEL 7.0 Beta.] Live migration with non-shared storage: storage live migration can now be performed in parallel with live migration of a guest. Therefore, the entire running guest can be moved to another host, even with non-shared storage.
Clone Of: 845674
Clones: 869944
Environment:
Last Closed: 2014-06-13 10:01:19 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0985 0 normal SHIPPED_LIVE qemu-kvm bug fix and enhancement update 2014-07-29 23:34:16 UTC

Description Karen Noel 2012-08-03 20:19:26 UTC
Support in qemu-kvm to move a guest's storage and do live migration of the guest at the same time.

+++ This bug was initially created as a clone of Bug #845674 +++

Description of problem:

Support storage live migration with live guest migration, so an entire running guest can be moved to another host, even with non-shared storage.

Comment 1 Paolo Bonzini 2013-01-11 13:21:01 UTC
This consists of these 1.3 features:

* A new block job is supported: live disk mirroring (also known as "storage migration") moves data from one image to another. A new command "block-job-complete" is used to switch the VM to use the destination image exclusively.

* QEMU embeds an NBD server, accessible via the monitor. The NBD server allows live access to the image seen by the VM. Note that the embedded server uses "named exports", which QEMU can access using the "nbd://host:port/name" syntax. 

* The monitor now remains responsive during incoming migration. The new NBD server is also available during incoming migration.
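For a pure storage migration (with no guest migration to follow), the mirror job is finished with the "block-job-complete" command mentioned above. A minimal sketch of that QMP payload, built in Python; the device name is an example, not taken from a real setup:

```python
import json

# Pivot the VM to the destination image once the mirror job has reached
# steady state ("drive-virtio0" is an example device name).
cmd = {"execute": "block-job-complete",
       "arguments": {"device": "drive-virtio0"}}
print(json.dumps(cmd))
```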

====

For testing, the process should be as follows:

1) Optionally, copy the base images from the source machine to the destination disk.

2) Create empty images on the destination disk, using "qemu-img create".  Point them to the base images copied in step 1, if that step was done.

3) Start the destination QEMU and set up the NBD server using the nbd-server-start and nbd-server-add commands.

 { "execute": "nbd-server-start", "arguments": {
   "addr": { "type": "inet",
             "data": { "host": "10.34.56.78", "port": "12345" } } } }

 { "execute": "nbd-server-add", "arguments": {
   "addr": { "device": "drive-virtio0", "writable": true" } }

 { "execute": "nbd-server-add", "arguments": {
   "addr": { "device": "drive-ide0-hd0", "writable": true" } }

 etc.

4) Invoke drive-mirror on the source, once for each migrated disk, with a destination pointing to the remote NBD server.  URI syntax can be used for drive-mirror's target argument, for example nbd://10.34.56.78:12345/diskname (where diskname is the -drive id specified on the destination).  Use "sync: 'top'" if the base images exist on the destination, "sync: 'full'" otherwise.

The format must _always_ be raw!  This is independent of the format of the disk on the source.

 { "execute": "drive-mirror",
   "arguments": { "device": "drive-ide-hd0",
                  "target": "nbd://10.34.56.78:12345/drive-virtio0",
                  "sync": "full", "format": "raw" } }

 { "execute": "drive-mirror",
   "arguments": { "device": "drive-ide-hd0",
                  "target": "nbd://10.34.56.78:12345/drive-ide0-hd0",
                  "sync": "full", "format": "raw" } }

5) Once all mirroring jobs reach a steady state, invoke the migrate command.

6) Once migration completes, quit the source QEMU and invoke the nbd-server-stop command on the destination QEMU.

 { "execute": "nbd-server-stop", "arguments": { } }

All steps can also be executed via the human monitor. The commands are named with underscores instead of dashes, however.
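The QMP payloads for steps 3, 4, and 6 can be sketched with a small helper that serializes each command as a JSON line. This is an editorial sketch, not part of the original report; the address, port, and drive name are the example values used above:

```python
import json

def qmp(execute, **arguments):
    """Build one QMP command as a JSON line."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# Destination (step 3): start the embedded NBD server and export a drive.
print(qmp("nbd-server-start",
          addr={"type": "inet",
                "data": {"host": "10.34.56.78", "port": "12345"}}))
print(qmp("nbd-server-add", device="drive-virtio0", writable=True))

# Source (step 4): mirror the disk to the matching named export.
# The target format must always be raw, regardless of the source format.
print(qmp("drive-mirror", device="drive-virtio0",
          target="nbd://10.34.56.78:12345/drive-virtio0",
          sync="full", format="raw"))

# Destination (step 6): stop the NBD server once migration has completed.
print(qmp("nbd-server-stop"))
```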

Comment 3 Shaolong Hu 2013-01-23 09:02:07 UTC
Hi Paolo,

After I execute:

 { "execute": "nbd-server-add", "arguments": {
   "addr": { "device": "drive-virtio0", "writable": true" } }

the QMP connection gets stuck without returning; however, the equivalent HMP command works.

I am not sure whether this is a bug or just a wrong command. I notice it is missing a " in front of true, but that is not the problem. Looking into qmp-commands.hx, I found the definition differs from our RHEL 6 internal version in qemu-monitor.hx:


{ 'command': 'nbd-server-start',
  'data': { 'addr': 'SocketAddress' } }

{ 'command': 'nbd-server-add', 'data': {'device': 'str', '*writable': 'bool'} }


Is this the upstream style or something new? I cannot tell from this what the final QMP command should be, and our internal qemu-kvm-rhel7 tree is now simply based on upstream 1.3, so a few tips would be great.

Comment 4 Paolo Bonzini 2013-01-23 11:45:23 UTC
That's a typo, sorry:

 { "execute": "nbd-server-add",
   "arguments": { "device": "drive-virtio0", "writable": true" } }

Comment 5 Paolo Bonzini 2013-01-23 11:45:47 UTC
That's a typo, sorry:

 { "execute": "nbd-server-add",
   "arguments": { "device": "drive-virtio0", "writable": true } }

Comment 6 Shaolong Hu 2013-01-24 03:35:55 UTC
(In reply to comment #5)
> That's a typo, sorry:
> 
>  { "execute": "nbd-server-add",
>    "arguments": { "device": "drive-virtio0", "writable": true } }

Remove the ", the command still hangs, won't return.

Comment 9 Shaolong Hu 2013-02-01 07:05:32 UTC
Current status:

1. On the destination host, after issuing the QMP command "nbd-server-add", the QMP connection hangs and the command won't return:

{ "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.66.71.224", "port": "3333" } } } }
{"return": {}}
{ "execute": "nbd-server-add", "arguments": { "addr": { "device": "drive-virtio-disk0", "writable": true } }

2. On the destination host, using the HMP command to add the block device instead works:

nbd_server_add -w drive-virtio-disk0

3. On the source host, start mirroring:

{ "execute": "drive-mirror", "arguments": { "device": "drive-virtio-disk0", "target": "nbd://10.66.71.224:3333/drive-virtio-disk0", "sync": "full", "format": "raw", "mode": "existing" } }

4. After reaching steady state, do the migration; it finishes correctly.


Functionally everything works fine; the only problem is that the QMP nbd-server-add command won't return.
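[Editorial note, not a confirmed diagnosis: one plausible reading of the hang is that the command as quoted in item 1 is not a complete JSON object (its braces are unbalanced), so the QMP parser keeps waiting for more input instead of replying. A sketch that checks this:]

```python
import json

# The exact command quoted in item 1 above, reproduced verbatim.
cmd = ('{ "execute": "nbd-server-add", "arguments": '
       '{ "addr": { "device": "drive-virtio-disk0", "writable": true } }')

# Three opening braces but only two closing ones: the outer object never
# ends, so a streaming JSON parser would wait for more input.
print(cmd.count("{"), cmd.count("}"))

try:
    json.loads(cmd)
except json.JSONDecodeError:
    print("incomplete JSON")
```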

Comment 12 Shaolong Hu 2014-01-17 05:05:02 UTC
Verified on qemu-kvm-rhev-1.5.3-31.el7.x86_64:

CMD:

On the destination host:

{ "execute": "qmp_capabilities" }
{"return": {}}

{ "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "10.66.5.84", "port": "3333" } } } }
{"return": {}}

{ "execute": "nbd-server-add", "arguments": { "device": "drive-virtio-disk0", "writable": true } }
{"return": {}}

On the source host:

{ "execute": "qmp_capabilities" }
{"return": {}}

{ "execute": "drive-mirror", "arguments": { "device": "drive-virtio-disk0", "target": "nbd://10.66.5.84:3333/drive-virtio-disk0", "sync": "full", "format": "raw", "mode": "existing" } }
{"return": {}}

Comment 14 Ludek Smid 2014-06-13 10:01:19 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

