Bug 1588356 - qemu crashed on the source host when doing storage migration with a source qcow2 disk created by 'qemu-img'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Assignee: Hanna Czenczek
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-07 06:45 UTC by yafu
Modified: 2019-11-06 07:11 UTC
CC List: 15 users

Fixed In Version: qemu-kvm-4.0.0-5.module+el8.1.0+3622+5812d9bf
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-06 07:11:37 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:3723 0 None None None 2019-11-06 07:11:52 UTC

Description yafu 2018-06-07 06:45:03 UTC
Description of problem:
qemu crashed on the source host when doing storage migration with a source qcow2 disk created by 'qemu-img'.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.12.0-3.el7.x86_64
qemu-img-rhev-2.12.0-3.el7.x86_64
libvirt-4.3.0-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Use 'qemu-img' to create a qcow2 image:
#qemu-img create -f qcow2 /var/lib/libvirt/images/a.qcow2 1024M

2. Attach the image to a running guest:
#virsh attach-disk ovmf /var/lib/libvirt/images/a.qcow2 sdb --cache none
Disk attached successfully

3. Check the image in the guest:
#virsh domblklist ovmf
Target     Source
------------------------------------------------
sda        /nfs-images/yafu/75.qcow2
sdb       /var/lib/libvirt/images/a.qcow2

4. Do storage migration:
# virsh migrate ovmf qemu+ssh://10.73.130.35/system --live --verbose --copy-storage-all --migrate-disks sdb
error: Unable to read from monitor: Connection reset by peer

5. Check the qemu log on the source host:
# tail -n 5 /var/log/libvirt/qemu/ovmf.log
2018-06-07 06:27:22.421+0000: 26139: debug : virFileClose:110 : Closed fd 32
2018-06-07 06:27:22.421+0000: 26139: debug : virCommandHandshakeChild:462 : Handshake with parent is done
2018-06-07T06:27:22.467245Z qemu-kvm: -chardev pty,id=charua-04c2decd-8e33-4023-84de-a2205c777af7: char device redirected to /dev/pts/6 (label charua-04c2decd-8e33-4023-84de-a2205c777af7)
qemu-kvm: block/io.c:1993: bdrv_co_block_status: Assertion `*pnum && (((*pnum) % (align)) == 0) && align > offset - aligned_offset' failed.
2018-06-07 06:27:32.673+0000: shutting down, reason=crashed

6. Check the image size on both the source host and the target host:
Source host:
# ll /var/lib/libvirt/images/a.qcow2 
-rw-r--r--. 1 root root 196624 Jun  7 14:26 /var/lib/libvirt/images/a.qcow2

Target host:
# ll /var/lib/libvirt/images/a.qcow2 
-rw-------. 1 root root 197120 Jun  7 02:09 /var/lib/libvirt/images/a.qcow2
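[Editorial illustration, not part of the original report: the file sizes above already hint at the root cause. Assuming the usual 512-byte logical sector size required for O_DIRECT requests, the source image's file size is not sector-aligned, while the target's is:]

```python
# Check whether the observed file sizes are 512-byte aligned.
# 196624 and 197120 are the sizes reported above; 512 is the assumed
# logical sector size that direct I/O requests must be aligned to.
SECTOR = 512

src_size = 196624  # source host: a.qcow2
dst_size = 197120  # target host: a.qcow2

print(src_size % SECTOR)  # 16 -> source image file is NOT sector-aligned
print(dst_size % SECTOR)  # 0  -> target image file IS sector-aligned
```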

Actual results:
qemu crashed on the source host when doing storage migration with a source qcow2 disk created by 'qemu-img'.

Expected results:
The migration should complete successfully.

Additional info:
The backtrace of the crashed qemu process:
(gdb) bt
#0  0x00007f777d064207 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007f777d0658f8 in __GI_abort () at abort.c:90
#2  0x00007f777d05d026 in __assert_fail_base (fmt=0x7f777d1b8520 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x564fce5323f8 "*pnum && (((*pnum) % (align)) == 0) && align > offset - aligned_offset", file=file@entry=0x564fce53215a "block/io.c", line=line@entry=1993, function=function@entry=0x564fce532760 <__PRETTY_FUNCTION__.25519> "bdrv_co_block_status") at assert.c:92
#3  0x00007f777d05d0d2 in __GI___assert_fail (assertion=assertion@entry=0x564fce5323f8 "*pnum && (((*pnum) % (align)) == 0) && align > offset - aligned_offset", file=file@entry=0x564fce53215a "block/io.c", line=line@entry=1993, function=function@entry=0x564fce532760 <__PRETTY_FUNCTION__.25519> "bdrv_co_block_status") at assert.c:101
#4  0x0000564fce3302c9 in bdrv_co_block_status (bs=0x564fd2ea0000, want_zero=want_zero@entry=true, offset=0, bytes=197120, pnum=pnum@entry=0x7f76a53cff38, map=map@entry=0x7f76a53cfd50, file=file@entry=0x7f76a53cfd58) at block/io.c:1992
#5  0x0000564fce3301ab in bdrv_co_block_status (bs=bs@entry=0x564fd2bf2800, want_zero=want_zero@entry=true, offset=offset@entry=0, bytes=197120, bytes@entry=262144, pnum=pnum@entry=0x7f76a53cff38, map=map@entry=0x0, file=file@entry=0x0)
    at block/io.c:2004
#6  0x0000564fce330395 in bdrv_block_status_above_co_entry (file=0x0, map=0x0, pnum=0x7f76a53cff38, bytes=262144, offset=0, want_zero=true, base=0x0, bs=<optimized out>) at block/io.c:2082
#7  0x0000564fce330395 in bdrv_block_status_above_co_entry (opaque=opaque@entry=0x7f76a53cfe20) at block/io.c:2112
#8  0x0000564fce330530 in bdrv_common_block_status_above (bs=bs@entry=0x564fd2bf2800, base=base@entry=0x0, want_zero=want_zero@entry=true, offset=offset@entry=0, bytes=<optimized out>, pnum=pnum@entry=0x7f76a53cff38, map=map@entry=0x0, file=file@entry=0x0) at block/io.c:2146
#9  0x0000564fce330715 in bdrv_block_status_above (bs=bs@entry=0x564fd2bf2800, base=base@entry=0x0, offset=offset@entry=0, bytes=<optimized out>, pnum=pnum@entry=0x7f76a53cff38, map=map@entry=0x0, file=file@entry=0x0) at block/io.c:2159
#10 0x0000564fce32cea6 in mirror_run (s=0x564fd8536000) at block/mirror.c:397
#11 0x0000564fce32cea6 in mirror_run (opaque=0x564fd8536000) at block/mirror.c:815
#12 0x0000564fce3c86fa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:116
#13 0x00007f777d075fc0 in __start_context () at /lib64/libc.so.6
#14 0x00007f776a4c3b40 in  ()
#15 0x0000000000000000 in  ()

Comment 9 Hanna Czenczek 2019-05-14 14:52:38 UTC
Simpler reproducer without libvirt (on current upstream master):

$ x86_64-softmmu/qemu-system-x86_64 -qmp stdio -display none \
    -blockdev node-name=src,driver=raw,file.driver=file,file.filename=foo,file.cache.direct=on <<EOF
{'execute':'qmp_capabilities'}
{'execute':'drive-mirror','arguments':{'job-id':'mirror','device':'src','mode':'existing','target':'null-co://','sync':'full','format':'raw'}}
EOF
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 0, "major": 4}, "package": "v4.0.0-473-ge329ad2ab7"}, "capabilities": ["oob"]}}
{"return": {}}
{"timestamp": {"seconds": 1557845520, "microseconds": 496092}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "mirror"}}
{"timestamp": {"seconds": 1557845520, "microseconds": 496172}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "mirror"}}
qemu-system-x86_64: block/io.c:2093: bdrv_co_block_status: Assertion `*pnum && QEMU_IS_ALIGNED(*pnum, align) && align > offset - aligned_offset' failed.
[1]    10028 abort (core dumped)  x86_64-softmmu/qemu-system-x86_64 -qmp stdio -display none -blockdev  <<<''

Comment 10 Hanna Czenczek 2019-05-14 14:59:37 UTC
Even simpler:

$ echo > foo
$ qemu-img map --image-opts driver=file,filename=foo,cache.direct=on
Offset          Length          Mapped to       File
qemu-img: block/io.c:2093: bdrv_co_block_status: Assertion `*pnum && QEMU_IS_ALIGNED(*pnum, align) && align > offset - aligned_offset' failed.
[1]    10954 abort (core dumped)  ./qemu-img map --image-opts driver=file,filename=foo,cache.direct=on

The problem is that with direct I/O, we require requests to be aligned to the disk’s sector size; but here, the file size is not aligned.  This then results in the above assertion failure.

I’m not sure whether we should just reject unaligned files with direct I/O (actually seems very sensible to me), or whether it is OK to have an unaligned tail.

Max
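[Editorial sketch, not from the original comment: the failing check in bdrv_co_block_status() (block/io.c) can be modeled in a few lines, assuming align = 512 (the request alignment imposed by O_DIRECT) and the 1-byte file produced by `echo > foo`:]

```python
# Model of the aborting assertion, under the stated assumptions.
def is_aligned(value, align):
    return value % align == 0

align = 512
offset = 0
aligned_offset = offset - (offset % align)  # request start rounded down: 0
pnum = 1  # block status stops at the 1-byte EOF, leaving an unaligned tail

# The condition asserted in bdrv_co_block_status():
ok = pnum != 0 and is_aligned(pnum, align) and align > offset - aligned_offset
print(ok)  # False: pnum (1) is not a multiple of align (512), so qemu aborts
```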

Comment 11 Hanna Czenczek 2019-06-25 19:52:51 UTC
*** Bug 1678979 has been marked as a duplicate of this bug. ***

Comment 15 aihua liang 2019-07-24 07:20:56 UTC
Can reproduce it on qemu-kvm-rhev-2.12.0-3.el7.x86_64.

Reproduction steps:
 1.Start src guest by qemu cmds:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-215744-4gqhxTV6,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190602-215744-4gqhxTV6,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idKnSrhI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190602-215744-4gqhxTV6,path=/var/tmp/seabios-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190602-215744-4gqhxTV6,iobase=0x402 \
    -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -device virtio-scsi-pci,id=scsi0,addr=0x7 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/mnt/nfs/rhel77-64-virtio.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \
    -device virtio-net-pci,mac=9a:fb:fc:fd:fe:ff,id=idrI84Jx,vectors=4,netdev=idZnFQVB,bus=pci.0,addr=0x5  \
    -netdev tap,id=idZnFQVB,vhost=on \
    -m 4096  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Penryn',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
    -msg timestamp=on \
    -sandbox on,obsolete=deny,elevateprivileges=deny,resourcecontrol=deny \
    -qmp tcp:0:3000,server,nowait \

 2.Start dst guest by qemu cmds:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-215744-4gqhxTV7,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190602-215744-4gqhxTV6,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idKnSrhI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190602-215744-4gqhxTV6,path=/var/tmp/seabios-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190602-215744-4gqhxTV6,iobase=0x402 \
    -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -device virtio-scsi-pci,id=scsi0,addr=0x7 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/mnt/nfs/rhel77-64-virtio.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \
    -drive id=drive_data1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/home/data.qcow2 \
    -device virtio-blk-pci,bus=pci.0,drive=drive_data1,id=data1 \
    -device virtio-net-pci,mac=9a:fb:fc:fd:fe:ff,id=idrI84Jx,vectors=4,netdev=idZnFQVB,bus=pci.0,addr=0x5  \
    -netdev tap,id=idZnFQVB,vhost=on \
    -m 4096  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Penryn',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
    -msg timestamp=on \
    -sandbox on,obsolete=deny,elevateprivileges=deny,resourcecontrol=deny \
    -qmp tcp:0:3000,server,nowait \
    -incoming tcp:0:5000 \

3. Start NBD Server in dst
   { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host": "10.66.144.42", "port": "3333" } } } }
   { "execute": "nbd-server-add", "arguments":{ "device": "drive_data1", "writable": true } }

4. Create a qcow2 data disk and hotplug it with raw format and cache=none
   {"execute":"__com.redhat_drive_add","arguments":{"file":"/home/data.qcow2","format":"raw","id":"drive_data1","cache":"none"}}
   {"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"drive_data1","id":"data1"}}

5. Mirror from src to dst.
   {"execute":"drive-mirror","arguments":{"device":"drive_data1","target":"nbd://10.66.144.42:3333/drive_data1","sync":"full","mode":"existing","format":"raw"}}

After step 5, qemu core dumps with:
 Ncat: Connection reset by peer.
 (qemu) qemu-kvm: block/io.c:1993: bdrv_co_block_status: Assertion `*pnum && (((*pnum) % (align)) == 0) && align > offset - aligned_offset' failed.
de.txt: line 34: 18075 Aborted               (core dumped) /usr/libexec/qemu-kvm -name 'avocado-vt-vm1' -machine pc -nodefaults -device VGA,bus=pci.0,addr=0x2 -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-215744-4gqhxTV6,server,nowait ..

(gdb) bt
#0  0x00007fafe53ad377 in raise () at /lib64/libc.so.6
#1  0x00007fafe53aea68 in abort () at /lib64/libc.so.6
#2  0x00007fafe53a6196 in __assert_fail_base () at /lib64/libc.so.6
#3  0x00007fafe53a6242 in  () at /lib64/libc.so.6
#4  0x0000564e78d422c9 in bdrv_co_block_status (bs=0x564e7b33a000, want_zero=want_zero@entry=true, offset=0, bytes=197120, pnum=pnum@entry=0x7faed63f5f38, map=map@entry=0x7faed63f5d50, file=file@entry=0x7faed63f5d58) at block/io.c:1992
#5  0x0000564e78d421ab in bdrv_co_block_status (bs=bs@entry=0x564e7ad88800, want_zero=want_zero@entry=true, offset=offset@entry=0, bytes=197120, bytes@entry=262144, pnum=pnum@entry=0x7faed63f5f38, map=map@entry=0x0, file=file@entry=0x0)
    at block/io.c:2004
#6  0x0000564e78d42395 in bdrv_block_status_above_co_entry (file=0x0, map=0x0, pnum=0x7faed63f5f38, bytes=262144, offset=0, want_zero=true, base=0x0, bs=<optimized out>) at block/io.c:2082
#7  0x0000564e78d42395 in bdrv_block_status_above_co_entry (opaque=opaque@entry=0x7faed63f5e20) at block/io.c:2112
#8  0x0000564e78d42530 in bdrv_common_block_status_above (bs=bs@entry=0x564e7ad88800, base=base@entry=0x0, want_zero=want_zero@entry=true, offset=offset@entry=0, bytes=<optimized out>, pnum=pnum@entry=0x7faed63f5f38, map=map@entry=0x0, file=file@entry=0x0) at block/io.c:2146
#9  0x0000564e78d42715 in bdrv_block_status_above (bs=bs@entry=0x564e7ad88800, base=base@entry=0x0, offset=offset@entry=0, bytes=<optimized out>, pnum=pnum@entry=0x7faed63f5f38, map=map@entry=0x0, file=file@entry=0x0) at block/io.c:2159
#10 0x0000564e78d3eea6 in mirror_run (s=0x564e7acac5a0) at block/mirror.c:397
#11 0x0000564e78d3eea6 in mirror_run (opaque=0x564e7acac5a0) at block/mirror.c:815
#12 0x0000564e78dda6fa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:116
#13 0x00007fafe53bf180 in __start_context () at /lib64/libc.so.6
#14 0x00007fff908aacf0 in  ()
#15 0x0000000000000000 in  ()


Will verify it on RHEL8.1 and give test result later.

Comment 16 aihua liang 2019-07-24 08:43:18 UTC
Verified on qemu-kvm-4.0.0-6.module+el8.1.0+3736+a2aefea3.x86_64: the problem has been fixed. Setting status to "Verified".

 Test steps:
   1.Start src guest without data disk.
      /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-215744-4gqhxTV6,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190602-215744-4gqhxTV6,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idKnSrhI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190602-215744-4gqhxTV6,path=/var/tmp/seabios-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190602-215744-4gqhxTV6,iobase=0x402 \
    -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -device virtio-scsi-pci,id=scsi0,addr=0x7 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/mnt/nfs/rhel77-64-virtio.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \
    -device virtio-net-pci,mac=9a:fb:fc:fd:fe:ff,id=idrI84Jx,vectors=4,netdev=idZnFQVB,bus=pci.0,addr=0x5  \
    -netdev tap,id=idZnFQVB,vhost=on \
    -m 4096  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Penryn',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
    -msg timestamp=on \
    -sandbox on,obsolete=deny,elevateprivileges=deny,resourcecontrol=deny \
    -qmp tcp:0:3000,server,nowait \

  2.In dst, create a qcow2 image, start it with raw format and cache=none.
     #qemu-img create -f qcow2 /home/data.qcow2 2G
     /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-215744-4gqhxTV7,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190602-215744-4gqhxTV6,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idKnSrhI  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190602-215744-4gqhxTV6,path=/var/tmp/seabios-20190602-215744-4gqhxTV6,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190602-215744-4gqhxTV6,iobase=0x402 \
    -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -device virtio-scsi-pci,id=scsi0,addr=0x7 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/mnt/nfs/rhel77-64-virtio.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,bus=scsi0.0 \
    -drive id=drive_data1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/home/data.qcow2 \
    -device virtio-blk-pci,bus=pci.0,drive=drive_data1,id=data1 \
    -device virtio-net-pci,mac=9a:fb:fc:fd:fe:ff,id=idrI84Jx,vectors=4,netdev=idZnFQVB,bus=pci.0,addr=0x5  \
    -netdev tap,id=idZnFQVB,vhost=on \
    -m 4096  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Penryn',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
    -msg timestamp=on \
    -sandbox on,obsolete=deny,elevateprivileges=deny,resourcecontrol=deny \
    -qmp tcp:0:3000,server,nowait \
    -incoming tcp:0:5000 \

 3.In dst, start NBD Server and expose data disk
    { "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet","data": { "host": "10.73.130.203", "port": "3333" } } } }
    { "execute": "nbd-server-add", "arguments":{ "device": "drive_data1", "writable": true } }

 4.In src, create a qcow2 image, hotplug it with raw format and cache=none
    #qemu-img create -f qcow2 data.qcow2 2G
    (hmp)drive_add auto if=none,file=/home/data.qcow2,format=raw,id=drive_data1,cache=none
    {"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"drive_data1","id":"data1"}}

 5.Mirror from src to dst.
    {"execute":"drive-mirror","arguments":{"device":"drive_data1","target":"nbd://10.73.130.203:3333/drive_data1","sync":"full","mode":"existing","format":"raw"}}
    {"timestamp": {"seconds": 1563956965, "microseconds": 632793}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_data1"}}
{"timestamp": {"seconds": 1563956965, "microseconds": 632856}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_data1"}}
{"return": {}}
{"timestamp": {"seconds": 1563956965, "microseconds": 638302}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive_data1"}}
{"timestamp": {"seconds": 1563956965, "microseconds": 638345}, "event": "BLOCK_JOB_READY", "data": {"device": "drive_data1", "len": 197120, "offset": 197120, "speed": 0, "type": "mirror"}}

 6.After it reaches ready status, set migration capabilities in both src and dst.
   {"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"pause-before-switchover","state":true}]}}

 7.Migrate from src to dst.
   {"execute": "migrate","arguments":{"uri": "tcp:10.73.130.203:5000"}}

 8.Complete the job.
   { "execute": "block-job-cancel","arguments":{"device":"drive_data1"}}
   {"timestamp": {"seconds": 1563957163, "microseconds": 467488}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive_data1"}}
{"timestamp": {"seconds": 1563957163, "microseconds": 467543}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive_data1"}}
{"timestamp": {"seconds": 1563957163, "microseconds": 468733}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_data1", "len": 197120, "offset": 197120, "speed": 0, "type": "mirror"}}
{"timestamp": {"seconds": 1563957163, "microseconds": 468787}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_data1"}}
{"timestamp": {"seconds": 1563957163, "microseconds": 468817}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_data1"}}

 9.Continue the migration
   {"execute":"migrate-continue","arguments":{"state":"pre-switchover"}}

After step 9, the migration finishes; src VM status: paused (postmigrate), dst VM status: running.

Comment 19 errata-xmlrpc 2019-11-06 07:11:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

