Bug 1003467 - Backport migration fixes from post qemu 1.6
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Dr. David Alan Gilbert
QA Contact: Virtualization Bugs
Depends On:
Reported: 2013-09-02 02:48 EDT by Orit Wasserman
Modified: 2014-06-17 23:35 EDT
CC: 6 users

See Also:
Fixed In Version: qemu-kvm-1.5.3-38.el7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2014-06-13 08:01:50 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Orit Wasserman 2013-09-02 02:48:16 EDT
Description of problem:
Fixes include:

1. pc: drop external DSDT loading

    This breaks migration and is unneeded with modern SeaBIOS.

2. migration: don't use uninitialized variables

    The qmp_migrate method uses the 'blk' and 'inc' parameters without
    checking whether they are valid (they may be uninitialized if the
    command is received via QMP).

3. migration: send total time in QMP at "completed" stage

    The "completed" stage sets total_time but not has_total_time, so
    the value is omitted from the QMP reply (though HMP still shows it).

4. migration: fix spice migration

    Commit 29ae8a4133082e16970c9d4be09f4b6a15034617 ("rdma: introduce
    MIG_STATE_NONE and change MIG_STATE_SETUP state transition") changed the
    state transitions during migration setup.
    Spice used to be notified with MIG_STATE_ACTIVE and it detected this
    using migration_is_active().  Spice is now notified with
    MIG_STATE_SETUP and migration_is_active() no longer works.
    Replace migration_is_active() with migration_in_setup() to fix spice.

5. migration: notify migration state before starting thread
    The migration thread runs outside the QEMU global mutex when possible.
    Therefore we must notify migration state change *before* starting the
    migration thread.
    This allows registered listeners to act before live migration iterations
    begin.  Therefore they can get into a state that allows for live
    migration.  When the migration thread starts everything will be ready.
    Without this patch there is a race condition during migration setup,
    depending on whether the migration thread has already transitioned from
    SETUP to ACTIVE state.
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

6. target-i386: Disable PMU CPUID leaf by default
    Bug description: QEMU currently gets all bits from GET_SUPPORTED_CPUID
    for CPUID leaf 0xA and passes them directly to the guest. This makes
    the guest ABI depend on host kernel and host CPU capabilities, and
    breaks live migration if we migrate between hosts with different
    capabilities (e.g., different number of PMU counters).
    Add a "pmu" property to X86CPU, and set it to true only on "-cpu host",
    or on pc-*-1.5 and older machine-types.
    For now, setting pmu=on will enable the current passthrough mode that
    doesn't have any ABI stability guarantees, but in the future we may
    implement a mode where the PMU CPUID bits are stable and configurable.
    Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Andreas Färber <afaerber@suse.de>

7. migration: add autoconvergence documentation
    This hunk got lost during merge.  It is documentation.

8. Fix real mode guest segments dpl value in savevm

    Older KVM versions put an invalid value (0x3) in the dpl field of
    the segment registers for real mode guests.
    This breaks migration from those hosts to hosts with unrestricted
    guest support.
    We detect this by checking the CS dpl value for real mode guests
    and fix the dpl of all the segment registers.

9. Fix real mode guest migration

    Older KVM versions save the CS dpl as an invalid value (0x3) for
    real mode guests. This patch detects the situation when loading CPU
    state and sets the dpl of all segments to zero.
    This will allow migration from older KVM on hosts without
    unrestricted guest support to hosts with unrestricted guest support.
    For example, migration of a real mode guest from a Penryn host
    (with kernel 2.6.32) to a Westmere host will fail with "kvm:
    unhandled exit".

10. block-migration: efficiently encode zero blocks

    This patch adds an efficient encoding for zero blocks by adding a
    new flag indicating a block is completely zero. Additionally,
    bdrv_write_zeroes() is used at the destination to efficiently
    write these zeroes. Depending on the implementation, this avoids
    fully provisioning the destination target.
    Signed-off-by: Peter Lieven <pl@kamp.de>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

11. block/raw: add bdrv_co_write_zeroes
12. block: add bdrv_write_zeroes()

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:
Comment 5 Dr. David Alan Gilbert 2014-01-14 04:12:59 EST
The rate limit fix landed upstream as commit 40596834c0d57a223124a956ccbe39dfeadc9f0e.
Comment 6 Miroslav Rezanina 2014-01-17 08:26:59 EST
Fix included in qemu-kvm-1.5.3-38.el7
Comment 9 Dr. David Alan Gilbert 2014-02-21 04:08:20 EST
Hi Qunfang,
  I think if you can make sure some of your tests are using spice (which I assume they are), that would be good.  Also try a test which sets the bandwidth limits and checks that those work OK.

I think the test you list for item 9 should be fine; the important thing is to still be in the BIOS during the migrate.
Comment 11 Dr. David Alan Gilbert 2014-02-21 04:59:03 EST
Yes, just set the migration speed to something slower and make sure it takes longer.
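
For reference, that check can be run from the HMP monitor; a sketch,
with the destination host and port as placeholders:

```
(qemu) migrate_set_speed 30m
(qemu) migrate -d tcp:dest-host:4444
(qemu) info migrate
```

Repeat with `migrate_set_speed 100m` and compare the "total time" lines
once "Migration status: completed" is reported.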
Comment 12 Qunfang Zhang 2014-02-24 00:42:13 EST
(In reply to Dr. David Alan Gilbert from comment #11)
> Yes, just set the migration speed to something slower and make sure it takes
> longer.

Yes, comparing the migration total time for the same VM on the same hosts with the 100 MBps and the default 30 MBps speed, the former is quicker.


/usr/libexec/qemu-kvm -cpu SandyBridge -M pc -enable-kvm -m 4096 -smp 2,sockets=2,cores=1,threads=1 -name rhel6.4-64 -uuid 9a0e67ec-f286-d8e7-0548-0c1c9ec93009 -nodefconfig -nodefaults -monitor stdio -rtc base=utc,clock=host,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive file=/mnt/RHEL-Server-7.0-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:d5:51:8a,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=port1,bus=virtio-serial0.0,id=port1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=port2,bus=virtio-serial0.0,id=port2 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -vnc :10 -vga std -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -drive if=none,id=drive-fdc0-0-0,format=raw,cache=none -global isa-fdc.driveA=drive-fdc0-0-0 -qmp tcp:0:5555,server,nowait -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 

100M migration speed:

(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: completed
total time: 4015 milliseconds
downtime: 88 milliseconds
setup: 2 milliseconds
transferred ram: 346134 kbytes
throughput: 753.63 mbps
remaining ram: 0 kbytes
total ram: 4211416 kbytes
duplicate: 969761 pages
skipped: 0 pages
normal: 84238 pages
normal bytes: 336952 kbytes

30M migration speed: 

(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: completed
total time: 11011 milliseconds
downtime: 101 milliseconds
setup: 2 milliseconds
transferred ram: 359230 kbytes
throughput: 268.57 mbps
remaining ram: 0 kbytes
total ram: 4211416 kbytes
duplicate: 968684 pages
skipped: 0 pages
normal: 87508 pages
normal bytes: 350032 kbytes
Comment 13 Qunfang Zhang 2014-02-24 00:43:35 EST
Setting to VERIFIED according to comments 8, 9, 10, and 12. Please correct me if something is missing.  Thanks.
Comment 15 Ludek Smid 2014-06-13 08:01:50 EDT
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.
