Bug 1074913 - Migration cannot finish with 1024k 'remaining ram' left after hot-unplugging 4 NICs
Summary: Migration cannot finish with 1024k 'remaining ram' left after hot-unplugging 4 NICs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
: ---
Assignee: Dr. David Alan Gilbert
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: RHEL7.0Virt-PostBeta(z-stream) 1110189
 
Reported: 2014-03-11 08:51 UTC by mazhang
Modified: 2016-09-20 04:40 UTC (History)
CC List: 15 users

Fixed In Version: qemu-kvm-1.5.3-63.el7
Doc Type: Bug Fix
Doc Text:
Previously, the QEMU migration code did not account for the gaps caused by hot-unplugged devices and thus expected more memory to be transferred during migrations. As a consequence, a guest migration failed to complete after multiple devices were hot-unplugged. In addition, the migration info text afterwards displayed erroneous values for the "remaining ram" item. With this update, QEMU correctly calculates memory after a device has been unplugged, and subsequent guest migrations proceed as expected. (An illustrative model of this accounting gap appears below, after the remaining bug fields.)
Clone Of:
: 1110189 (view as bug list)
Environment:
Last Closed: 2015-03-05 08:04:55 UTC
Target Upstream Version:
Embargoed:
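
To make the Doc Text above concrete, the following is a minimal illustrative model in Python 3. It is not QEMU source code and not the actual fix; it only shows why the migration loop never finishes if "remaining" is measured against a total that still counts RAM regions of hot-unplugged devices. The 256 KB per-NIC figure is taken from comment 7 below; everything else is a simplification.

# Illustrative model only; not QEMU code.
def migrate(live_ram_kb, stale_gap_kb, max_rounds=10):
    total_kb = live_ram_kb + stale_gap_kb      # gaps are (wrongly) still counted
    for _ in range(max_rounds):
        sent_kb = live_ram_kb                  # only pages that really exist can be sent
        remaining_kb = total_kb - sent_kb
        if remaining_kb == 0:                  # completion test never fires
            return "completed"
    return "stuck, remaining ram: %d kbytes" % remaining_kb

# 2 GiB guest (-m 2G) with 4 hot-unplugged NICs at roughly 256 KB each (comment 7):
print(migrate(live_ram_kb=2 * 1024 * 1024, stale_gap_kb=4 * 256))
# Prints "stuck, remaining ram: 1024 kbytes"; 1024 kbytes is the 1048576 bytes
# reported in the description below.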


Attachments (Terms of Use)
Script that triggers this bug (2.83 KB, application/x-shellscript), attached 2014-03-20 12:42 UTC by Dr. David Alan Gilbert


Links
Red Hat Product Errata RHSA-2015:0349 (normal, SHIPPED_LIVE): Important: qemu-kvm security, bug fix, and enhancement update (last updated 2015-03-05 12:27:34 UTC)

Description mazhang 2014-03-11 08:51:41 UTC
Description of problem:
Migration cannot finish, with 1024k of 'remaining ram' left, after hot-unplugging 4 NICs in a win8-32 guest.

Version-Release number of selected component (if applicable):

Host:
qemu-kvm-common-rhev-1.5.3-50.el7.x86_64
qemu-kvm-tools-rhev-1.5.3-50.el7.x86_64
qemu-kvm-rhev-1.5.3-50.el7.x86_64
qemu-guest-agent-1.5.3-50.el7.x86_64
qemu-img-rhev-1.5.3-50.el7.x86_64
qemu-kvm-rhev-debuginfo-1.5.3-50.el7.x86_64
kernel-3.10.0-99.el7.x86_64

Guest:
win8-32
virtio-win-prewhql-0.1-74

How reproducible:
100%

Steps to Reproduce:
1. Boot the VM:
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
-m 2G \
-smp 4,sockets=2,cores=2,threads=1,maxcpus=16 \
-enable-kvm \
-name win8 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-rtc base=localtime,clock=host,driftfix=slew \
-nodefaults \
-monitor stdio \
-qmp tcp:localhost:6666,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-vga cirrus \
-vnc :0 \
-drive file=/home/win8-32.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:13:10:10,bus=pci.0,addr=0x4 \
-netdev tap,id=hostnet1,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:13:10:11,bus=pci.0,addr=0x5 \
-netdev tap,id=hostnet2,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest2 \
-device virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:13:10:12,bus=pci.0,addr=0x6 \
-netdev tap,id=hostnet3,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest3 \
-device virtio-net-pci,netdev=hostnet3,id=net3,mac=52:54:00:13:10:13,bus=pci.0,addr=0x7 \

2. Hot-unplug all NICs (a scripted version of this step is sketched after the QMP log below).
{"execute": "device_del", "arguments": {"id": "net0"}}
{"return": {}}
{"execute": "device_del", "arguments": {"id": "net{"timestamp": {"seconds": 1394527294, "microseconds": 201117}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net0/virtio-backend"}}
{"timestamp": {"seconds": 1394527294, "microseconds": 201265}, "event": "DEVICE_DELETED", "data": {"device": "net0", "path": "/machine/peripheral/net0"}}
1"}}
{"return": {}}
{"timestamp": {"seconds": 1394527297, "microseconds": 757432}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}
{"timestamp": {"seconds": 1394527297, "microseconds": 757582}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}
{"execute": "device_del", "arguments": {"id": "net2"}}
{"return": {}}
{"timestamp": {"seconds": 1394527308, "microseconds": 975426}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net2/virtio-backend"}}
{"timestamp": {"seconds": 1394527308, "microseconds": 975581}, "event": "DEVICE_DELETED", "data": {"device": "net2", "path": "/machine/peripheral/net2"}}
{"execute": "device_del", "arguments": {"id": "net3"}}
{"return": {}}
{"timestamp": {"seconds": 1394527315, "microseconds": 516418}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net3/virtio-backend"}}
{"timestamp": {"seconds": 1394527315, "microseconds": 516613}, "event": "DEVICE_DELETED", "data": {"device": "net3", "path": "/machine/peripheral/net3"}}
{"execute": "netdev_del", "arguments": {"id": "hostnet0"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet2"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet1"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet3"}}
{"return": {}}

3. Migrate the guest to the remote host.
{"execute": "migrate","arguments":{"uri": "tcp:[2001::8a51:fbff:fe71:5249]:5800"}}
{"return": {}}
{"execute":"query-migrate"}
{"return": {"expected-downtime": 30, "status": "active", "setup-time": 1, "total-time": 7311, "ram": {"total": 2164617216, "remaining": 1765462016, "mbps": 268.58768, "transferred": 245095755, "duplicate": 37523, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 246505472, "normal": 60182}}}


Actual results:
Migration cannot finish; 'remaining ram' stays at 1048576 bytes (1024 kbytes). A polling sketch that detects this kind of stall follows the samples below.
{"return": {"expected-downtime": 30, "status": "active", "setup-time": 1, "total-time": 81471, "ram": {"total": 2164617216, "remaining": 1048576, "mbps": 42.27912, "transferred": 662209480, "duplicate": 422264, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 435023872, "normal": 106207}}}
{"execute":"query-migrate"}
{"return": {"expected-downtime": 30, "status": "active", "setup-time": 1, "total-time": 83231, "ram": {"total": 2164617216, "remaining": 1048576, "mbps": 42.78248, "transferred": 667987272, "duplicate": 422264, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 435023872, "normal": 106207}}}
{"execute":"query-migrate"}
{"return": {"expected-downtime": 30, "status": "active", "setup-time": 1, "total-time": 84815, "ram": {"total": 2164617216, "remaining": 1048576, "mbps": 42.172, "transferred": 673177552, "duplicate": 422264, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 435023872, "normal": 106207}}}
{"execute":"query-migrate"}
{"return": {"expected-downtime": 30, "status": "active", "setup-time": 1, "total-time": 86423, "ram": {"total": 2164617216, "remaining": 1048576, "mbps": 42.30928, "transferred": 678407312, "duplicate": 422264, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 435023872, "normal": 106207}}}


Expected results:
Migration works.

Additional info:
Hot-unplugging fewer than 4 NICs does not hit this problem.
A RHEL7 guest works well.

Comment 1 Karen Noel 2014-03-14 17:03:39 UTC
Please specify the QEMU command line on the destination. Does it include the 4 NICs or not? 

What happens with 1 NIC unplugged?

Can you re-explain this comment? It is not clear:

> Additional info:
> Less 4 nics hotunpluged not hit this problem.
> RHEL7 guest works well.

Does this mean migrating with the 4 NICs (not unplugged) works? 

Thanks.

Comment 2 mazhang 2014-03-17 01:33:48 UTC
1. With 1, 2, or 3 NICs unplugged, migration works well.
2. With a RHEL7 guest and 4 NICs unplugged, migration works well.

Comment 3 mazhang 2014-03-17 01:50:25 UTC
(In reply to Karen Noel from comment #1)
> Please specify the QEMU command line on the destination. Does it include the
> 4 NICs or not? 

Destination command line:
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
-m 2G \
-smp 4,sockets=2,cores=2,threads=1,maxcpus=16 \
-enable-kvm \
-name win8 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-rtc base=localtime,clock=host,driftfix=slew \
-nodefaults \
-monitor stdio \
-qmp tcp:localhost:6666,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-vga cirrus \
-vnc :0 \
-drive file=/home/win8-32.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-incoming tcp:0:5800 \

Comment 4 Dr. David Alan Gilbert 2014-03-17 10:13:39 UTC
Does this only happen with Windows guests or does it also happen with RHEL7 guests?

If you start off with 5 NICs:
   a) Does it fail if you remove 4?
   b) Does it fail if you remove 5?

Dave

Comment 5 mazhang 2014-03-18 06:28:05 UTC
(In reply to Dr. David Alan Gilbert from comment #4)
> Does this only happen with Windows guests or does it also happen with RHEL7
> guests?
> 

A RHEL7 guest also hits this problem.

> If you start off with 5 NICs:
>    a) Does it fail if you remove 4?

This scenario failed.

>    b) Does it fail if you remove 5?
> 

Also failed, and the "remaining ram" changes to 1280 kbytes:
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off
Migration status: active
total time: 58352 milliseconds
expected downtime: 30 milliseconds
setup: 1 milliseconds
transferred ram: 718901 kbytes
throughput: 17.63 mbps
remaining ram: 1280 kbytes
total ram: 2113872 kbytes
duplicate: 362411 pages
skipped: 0 pages
normal: 166057 pages
normal bytes: 664228 kbytes
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off
Migration status: active
total time: 59160 milliseconds
expected downtime: 30 milliseconds
setup: 1 milliseconds
transferred ram: 719970 kbytes
throughput: 17.60 mbps
remaining ram: 1280 kbytes
total ram: 2113872 kbytes
duplicate: 362411 pages
skipped: 0 pages
normal: 166057 pages
normal bytes: 664228 kbytes


> Dave

Comment 6 mazhang 2014-03-18 06:38:24 UTC
Started the VM with 6 NICs, removed all 6 NICs, then did a migration; it also failed.

(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: active
total time: 106287 milliseconds
expected downtime: 30 milliseconds
setup: 1 milliseconds
transferred ram: 728659 kbytes
throughput: 17.58 mbps
remaining ram: 1536 kbytes       <----- remaining ram increased.
total ram: 2113872 kbytes
duplicate: 376342 pages
skipped: 0 pages
normal: 152126 pages
normal bytes: 608504 kbytes
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: active
total time: 106863 milliseconds
expected downtime: 30 milliseconds
setup: 1 milliseconds
transferred ram: 729419 kbytes
throughput: 17.58 mbps
remaining ram: 1536 kbytes
total ram: 2113872 kbytes
duplicate: 376342 pages
skipped: 0 pages
normal: 152126 pages
normal bytes: 608504 kbytes

Comment 7 Dr. David Alan Gilbert 2014-03-18 09:09:58 UTC
OK, so it sounds like we're losing 256KB for each unplugged NIC.

I'll have a look.

Dave
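
That per-NIC figure is consistent with every stuck "remaining ram" value reported in this bug. A quick check (the 256 KB figure is from this comment; the observed values are from the description and comments 5 and 6):

# Consistency check: 256 KB lost per hot-unplugged NIC vs. the reported values.
per_nic_kb = 256
observed_kb = {4: 1048576 // 1024,   # description: remaining stuck at 1048576 bytes
               5: 1280,              # comment 5: "remaining ram: 1280 kbytes"
               6: 1536}              # comment 6: "remaining ram: 1536 kbytes"
for nics, remaining_kb in observed_kb.items():
    assert nics * per_nic_kb == remaining_kb
    print("%d NICs unplugged -> %d kbytes left behind" % (nics, nics * per_nic_kb))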

Comment 8 Dr. David Alan Gilbert 2014-03-20 12:42:26 UTC
Created attachment 876802 [details]
Script that triggers this bug

The attached script triggers it on RHEL7 with either our qemu or upstream 2.0.0-rc0.

Comment 9 Dr. David Alan Gilbert 2014-03-21 11:04:19 UTC
Fix posted upstream

Comment 11 Libor Miksik 2014-06-17 08:05:33 UTC
providing pm_ack+ based on GSS approval

Comment 13 Miroslav Rezanina 2014-06-17 12:40:31 UTC
Fix included in qemu-kvm-1.5.3-63.el7

Comment 14 huiqingding 2014-06-24 06:57:48 UTC
Verified this bug using the following versions:
qemu-kvm-1.5.3-64.el7.x86_64
kernel-3.10.0-128.el7.x86_64

Steps to Verify:
1. boot a win8-32 guest on src host
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
-m 2G \
-smp 4,sockets=2,cores=2,threads=1,maxcpus=16 \
-enable-kvm \
-name win8 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-rtc base=localtime,clock=host,driftfix=slew \
-nodefaults \
-monitor stdio \
-qmp tcp:localhost:6666,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-vga cirrus \
-vnc :0 \
-drive file=/home/win8-32-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:13:10:10,bus=pci.0,addr=0x4 \
-netdev tap,id=hostnet1,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:13:10:11,bus=pci.0,addr=0x5 \
-netdev tap,id=hostnet2,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest2 \
-device virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:13:10:12,bus=pci.0,addr=0x6 \
-netdev tap,id=hostnet3,vhost=on,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown,ifname=guest3 \
-device virtio-net-pci,netdev=hostnet3,id=net3,mac=52:54:00:13:10:13,bus=pci.0,addr=0x7 \
2. Hot-unplug four NICs
# telnet localhost 6666
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 3, "minor": 5, "major": 1}, "package": " (qemu-kvm-1.5.3-60.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
{"execute": "device_del", "arguments": {"id": "net0"}}
{"return": {}}
{"timestamp": {"seconds": 1403250008, "microseconds": 416150}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net0/virtio-backend"}}
{"timestamp": {"seconds": 1403250008, "microseconds": 416302}, "event": "DEVICE_DELETED", "data": {"device": "net0", "path": "/machine/peripheral/net0"}}
{"execute": "device_del", "arguments": {"id": "net1"}}
{"return": {}}
{"timestamp": {"seconds": 1403250012, "microseconds": 251351}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}
{"timestamp": {"seconds": 1403250012, "microseconds": 251507}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}
{"execute": "device_del", "arguments": {"id": "net2"}}
{"return": {}}
{"timestamp": {"seconds": 1403250017, "microseconds": 624422}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net2/virtio-backend"}}
{"timestamp": {"seconds": 1403250017, "microseconds": 624588}, "event": "DEVICE_DELETED", "data": {"device": "net2", "path": "/machine/peripheral/net2"}}
{"execute": "device_del", "arguments": {"id": "net3"}}
{"return": {}}
{"timestamp": {"seconds": 1403250021, "microseconds": 721016}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net3/virtio-backend"}}
{"timestamp": {"seconds": 1403250021, "microseconds": 721165}, "event": "DEVICE_DELETED", "data": {"device": "net3", "path": "/machine/peripheral/net3"}}
{"execute": "netdev_del", "arguments": {"id": "hostnet0"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet1"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet2"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet3"}}
{"return": {}}
3. do migration 
(qemu) migrate -d tcp:10.66.9.152:5800
4. check migration status in src qemu-kvm

Actual results:
Migration finishes normally; remaining ram is 0 kbytes.
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: completed
total time: 57862 milliseconds
downtime: 32 milliseconds
setup: 10 milliseconds
transferred ram: 663613 kbytes
throughput: 98.01 mbps
remaining ram: 0 kbytes
total ram: 2113872 kbytes
duplicate: 452422 pages
skipped: 0 pages
normal: 164586 pages
normal bytes: 658344 kbytes
(qemu) info status
VM status: paused (postmigrate)


Additional info:
I also tested the following scenarios; migration finishes normally and remaining ram is 0 kbytes.
1) Boot a win8-32 guest, hot-unplug 5 or 6 NICs, then migrate.
2) Boot a RHEL7 guest, hot-unplug 4, 5, or 6 NICs, then migrate.

Comment 15 huiqingding 2014-08-07 08:44:10 UTC
Tested this bug on an AMD host using the following versions:
qemu-kvm-rhev-2.1.0-3.el7ev.preview.x86_64
kernel-3.10.0-142.el7.x86_64

Steps to Test:
1. boot a win8-32 guest on src host
/usr/libexec/qemu-kvm \
-M pc \
-cpu Opteron_G2,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
-m 2G \
-smp 4,sockets=2,cores=2,threads=1,maxcpus=16 \
-enable-kvm \
-name win8 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-rtc base=localtime,clock=host,driftfix=slew \
-nodefaults \
-monitor stdio \
-qmp tcp:localhost:6666,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-vga cirrus \
-vnc :0 \
-drive file=/mnt/win8-32-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=writethrough,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:13:10:10,bus=pci.0,addr=0x4 \
-netdev tap,id=hostnet1,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:13:10:11,bus=pci.0,addr=0x5 \
-netdev tap,id=hostnet2,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:13:10:12,bus=pci.0,addr=0x6 \
-netdev tap,id=hostnet3,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet3,id=net3,mac=52:54:00:13:10:13,bus=pci.0,addr=0x7 \

2. Hot-unplug four NICs
# telnet localhost 6666
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 3, "minor": 5, "major": 1}, "package": " (qemu-kvm-1.5.3-60.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
{"execute": "device_del", "arguments": {"id": "net0"}}
{"return": {}}
{"timestamp": {"seconds": 1403250008, "microseconds": 416150}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net0/virtio-backend"}}
{"timestamp": {"seconds": 1403250008, "microseconds": 416302}, "event": "DEVICE_DELETED", "data": {"device": "net0", "path": "/machine/peripheral/net0"}}
{"execute": "device_del", "arguments": {"id": "net1"}}
{"return": {}}
{"timestamp": {"seconds": 1403250012, "microseconds": 251351}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}
{"timestamp": {"seconds": 1403250012, "microseconds": 251507}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}
{"execute": "device_del", "arguments": {"id": "net2"}}
{"return": {}}
{"timestamp": {"seconds": 1403250017, "microseconds": 624422}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net2/virtio-backend"}}
{"timestamp": {"seconds": 1403250017, "microseconds": 624588}, "event": "DEVICE_DELETED", "data": {"device": "net2", "path": "/machine/peripheral/net2"}}
{"execute": "device_del", "arguments": {"id": "net3"}}
{"return": {}}
{"timestamp": {"seconds": 1403250021, "microseconds": 721016}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net3/virtio-backend"}}
{"timestamp": {"seconds": 1403250021, "microseconds": 721165}, "event": "DEVICE_DELETED", "data": {"device": "net3", "path": "/machine/peripheral/net3"}}
{"execute": "netdev_del", "arguments": {"id": "hostnet0"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet1"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet2"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet3"}}
{"return": {}}

3. do migration 
(qemu) migrate -d tcp:10.66.9.152:5800

4. check migration status in src qemu-kvm

Actual results:
Migration finishes normally; remaining ram is 0 kbytes.
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: completed
total time: 20641 milliseconds
downtime: 3 milliseconds
setup: 22 milliseconds
transferred ram: 668477 kbytes
throughput: 268.57 mbps
remaining ram: 0 kbytes
total ram: 2113872 kbytes
duplicate: 415898 pages
skipped: 0 pages
normal: 165881 pages
normal bytes: 663524 kbytes

Additional info:
I also tested the following scenario; migration finishes normally and remaining ram is 0 kbytes.
1) Boot a win8-32 guest, hot-unplug 5 or 6 NICs, then migrate.

Comment 16 huiqingding 2014-08-08 07:12:40 UTC
Tested this bug on an Intel host using the following versions:
qemu-kvm-rhev-2.1.0-3.el7ev.preview.x86_64
kernel-3.10.0-140.el7.x86_64

Steps to Test:
1. boot a rhel7.1 guest on src host
/usr/libexec/qemu-kvm \
-M pc \
-cpu SandyBridge \
-m 2G \
-smp 4,sockets=2,cores=2,threads=1,maxcpus=16 \
-enable-kvm \
-name win8 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-rtc base=localtime,clock=host,driftfix=slew \
-nodefaults \
-monitor stdio \
-qmp tcp:localhost:6666,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-vga cirrus \
-vnc :0 \
-drive file=/mnt/rhel7_1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=writethrough,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:13:10:10,bus=pci.0,addr=0x4 \
-netdev tap,id=hostnet1,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:13:10:11,bus=pci.0,addr=0x5 \
-netdev tap,id=hostnet2,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:13:10:12,bus=pci.0,addr=0x6 \
-netdev tap,id=hostnet3,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet3,id=net3,mac=52:54:00:13:10:13,bus=pci.0,addr=0x7 \

2. Hot-unplug four NICs
# telnet localhost 6666
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 3, "minor": 5, "major": 1}, "package": " (qemu-kvm-1.5.3-60.el7)"}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
{"execute": "device_del", "arguments": {"id": "net0"}}
{"return": {}}
{"timestamp": {"seconds": 1403250008, "microseconds": 416150}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net0/virtio-backend"}}
{"timestamp": {"seconds": 1403250008, "microseconds": 416302}, "event": "DEVICE_DELETED", "data": {"device": "net0", "path": "/machine/peripheral/net0"}}
{"execute": "device_del", "arguments": {"id": "net1"}}
{"return": {}}
{"timestamp": {"seconds": 1403250012, "microseconds": 251351}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net1/virtio-backend"}}
{"timestamp": {"seconds": 1403250012, "microseconds": 251507}, "event": "DEVICE_DELETED", "data": {"device": "net1", "path": "/machine/peripheral/net1"}}
{"execute": "device_del", "arguments": {"id": "net2"}}
{"return": {}}
{"timestamp": {"seconds": 1403250017, "microseconds": 624422}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net2/virtio-backend"}}
{"timestamp": {"seconds": 1403250017, "microseconds": 624588}, "event": "DEVICE_DELETED", "data": {"device": "net2", "path": "/machine/peripheral/net2"}}
{"execute": "device_del", "arguments": {"id": "net3"}}
{"return": {}}
{"timestamp": {"seconds": 1403250021, "microseconds": 721016}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/net3/virtio-backend"}}
{"timestamp": {"seconds": 1403250021, "microseconds": 721165}, "event": "DEVICE_DELETED", "data": {"device": "net3", "path": "/machine/peripheral/net3"}}
{"execute": "netdev_del", "arguments": {"id": "hostnet0"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet1"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet2"}}
{"return": {}}
{"execute": "netdev_del", "arguments": {"id": "hostnet3"}}
{"return": {}}

3. do migration 
(qemu) migrate -d tcp:10.66.9.152:5800

4. check migration status in src qemu-kvm

Actual results:
Migration finishes normally; remaining ram is 0 kbytes.
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off 
Migration status: completed
total time: 20641 milliseconds
downtime: 3 milliseconds
setup: 22 milliseconds
transferred ram: 668477 kbytes
throughput: 268.57 mbps
remaining ram: 0 kbytes
total ram: 2113872 kbytes
duplicate: 415898 pages
skipped: 0 pages
normal: 165881 pages
normal bytes: 663524 kbytes

Additional info:
I also tested the following scenario; migration finishes normally and remaining ram is 0 kbytes.
1) Boot a rhel7.1 guest, hot-unplug 5 or 6 NICs, then migrate.

Comment 20 errata-xmlrpc 2015-03-05 08:04:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0349.html

