
Bug 956942

Summary: Guest w/ vhost=on over virtio-net-pci, under hmp, 'set_link $id_of_netdev off', then migrate, migrate failed, src qemu-kvm process core dumped
Product: Red Hat Enterprise Linux 7
Reporter: Qian Guo <qiguo>
Component: qemu-kvm
Assignee: jason wang <jasowang>
Status: CLOSED CURRENTRELEASE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Priority: high
Docs Contact:
Version: 7.0
CC: acathrow, chayang, hhuang, jasowang, juli, juzhang, michen, mrezanin, qzhang, virt-maint, xwei
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: qemu-kvm-1.5.0-1.el7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 957319 (view as bug list)
Environment:
Last Closed: 2014-06-13 11:34:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 957319

Description Qian Guo 2013-04-26 03:49:19 UTC
Description of problem:
Boot a RHEL7 guest w/ virtio-net-pci and vhost=on. After bootup, run 'set_link $id_of_netdev off' and then migrate; the migration fails and the src qemu-kvm process core dumps:

(qemu) qemu-kvm: /builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c:1125: virtio_net_save: Assertion `!n->vhost_started' failed.

Without vhost=on, migration succeeds.
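
For context, here is a minimal self-contained C sketch of the contract that assertion enforces. This is a toy model inferred from the backtrace below, not QEMU source; all names and layout are illustrative. The save path requires that the vhost backend has already been stopped, since a still-running backend could keep writing guest memory behind the migration stream:

/* Toy model of the save-time invariant; not QEMU code. */
#include <assert.h>
#include <stdbool.h>

struct VirtIONet {
    bool vhost_started;   /* true while the vhost-net backend runs */
};

/* Stand-in for virtio_net_save() (hw/virtio-net.c:1125 in qemu-1.4.0):
 * saving device state while vhost is still running would race with the
 * backend, so the code asserts the backend was stopped beforehand. */
static void virtio_net_save(struct VirtIONet *n)
{
    assert(!n->vhost_started);   /* the assertion that fires above */
    /* ... serialize device state ... */
}

int main(void)
{
    /* The buggy path reaches save with vhost still marked as running,
     * reproducing the reported abort. */
    struct VirtIONet n = { .vhost_started = true };
    virtio_net_save(&n);
    return 0;
}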

Version-Release number of selected component (if applicable):
# uname -r
3.9.0-0.rc7.53.el7.x86_64
# rpm -q qemu-kvm
qemu-kvm-1.4.0-2.1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a RHEL7 guest on the src host, and run a listening qemu-kvm process on the dest host.

on src host:
# /usr/libexec/qemu-kvm -cpu host -m 4G -smp 8,sockets=1,cores=8,threads=1 -M pc -enable-kvm -name win2012 -drive file=/home/rhel7cp2.qcow3,if=none,format=qcow2,werror=stop,rerror=stop,media=disk,id=drive-scsi0-disk0 -device virtio-scsi-pci,id=scsi0,addr=0x4 -device scsi-hd,scsi-id=0,lun=0,bus=scsi0.0,drive=drive-scsi0-disk0,id=virtio-disk0 -nodefaults -nodefconfig -monitor stdio -netdev tap,id=bd,script=/etc/qemu-ifup,vhost=on,ifname=qiguo1 -device virtio-net-pci,netdev=bd,mac=54:52:1a:46:0b:02,id=vnic1 -vnc :20 -vga std -boot menu=on -device virtio-balloon-pci,id=balloon1

2. After the guest boots up, in the HMP of the src host, set the tap's link off, then migrate to the dst host.
(qemu) set_link bd off

(qemu) migrate -d tcp:10.66.106.10:4444

3. Wait for some time
  
Actual results:
Migration does not complete successfully, and the src qemu-kvm process core dumps:
(qemu) qemu-kvm: /builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c:1125: virtio_net_save: Assertion `!n->vhost_started' failed.


(gdb) bt
#0  0x00007ffff288a819 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffff288bf28 in __GI_abort () at abort.c:90
#2  0x00007ffff28837f6 in __assert_fail_base (fmt=0x7ffff29d22e8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", 
    assertion=assertion@entry=0x5555558c8d76 "!n->vhost_started", file=file@entry=0x5555558c8ad8 "/builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c", line=line@entry=1125, 
    function=function@entry=0x5555558c8f10 <__PRETTY_FUNCTION__.22997> "virtio_net_save") at assert.c:92
#3  0x00007ffff28838a2 in __GI___assert_fail (assertion=assertion@entry=0x5555558c8d76 "!n->vhost_started", 
    file=file@entry=0x5555558c8ad8 "/builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c", line=line@entry=1125, 
    function=function@entry=0x5555558c8f10 <__PRETTY_FUNCTION__.22997> "virtio_net_save") at assert.c:101
#4  0x00005555557a1cdc in virtio_net_save (f=<optimized out>, opaque=0x7ffebcdec010) at /usr/src/debug/qemu-1.4.0/hw/virtio-net.c:1125
#5  0x00005555557bf3d4 in vmstate_save (se=0x5555567b79e0, f=0x555556bad3f0) at /usr/src/debug/qemu-1.4.0/savevm.c:1553
#6  qemu_savevm_state_complete (f=0x555556bad3f0) at /usr/src/debug/qemu-1.4.0/savevm.c:1733
#7  0x00005555556edfde in buffered_file_thread (opaque=0x555555c5d660 <current_migration.19549>) at migration.c:711
#8  0x00007ffff626dc53 in start_thread (arg=0x7ffeaf1f1700) at pthread_create.c:308
#9  0x00007ffff2949ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

(gdb) bt full
#0  0x00007ffff288a819 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
        resultvar = 0
        pid = 6350
        selftid = 6466
#1  0x00007ffff288bf28 in __GI_abort () at abort.c:90
        save_stage = 2
        act = {__sigaction_handler = {sa_handler = 0x7fffffffe55b, sa_sigaction = 0x7fffffffe55b}, sa_mask = {__val = {140737263760393, 93824995855064, 1125, 
              93825011473976, 140737262407989, 4, 140731836467744, 93825015665648, 93824994761933, 48682875872, 0, 0, 0, 21474836480, 140737354031104, 140737263772392}}, 
          sa_flags = 1435274614, sa_restorer = 0x5555558c8f10 <__PRETTY_FUNCTION__.22997>}
        sigs = {__val = {32, 0 <repeats 15 times>}}
#2  0x00007ffff28837f6 in __assert_fail_base (fmt=0x7ffff29d22e8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", 
    assertion=assertion@entry=0x5555558c8d76 "!n->vhost_started", file=file@entry=0x5555558c8ad8 "/builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c", line=line@entry=1125, 
    function=function@entry=0x5555558c8f10 <__PRETTY_FUNCTION__.22997> "virtio_net_save") at assert.c:92
        str = 0x7fffe4035270 "qemu-kvm: /builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c:1125: virtio_net_save: Assertion `!n->vhost_started' failed.\n"
        total = 4096
#3  0x00007ffff28838a2 in __GI___assert_fail (assertion=assertion@entry=0x5555558c8d76 "!n->vhost_started", 
    file=file@entry=0x5555558c8ad8 "/builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c", line=line@entry=1125, 
    function=function@entry=0x5555558c8f10 <__PRETTY_FUNCTION__.22997> "virtio_net_save") at assert.c:101
No locals.
#4  0x00005555557a1cdc in virtio_net_save (f=<optimized out>, opaque=0x7ffebcdec010) at /usr/src/debug/qemu-1.4.0/hw/virtio-net.c:1125
        i = <optimized out>
        n = 0x7ffebcdec010
        __PRETTY_FUNCTION__ = "virtio_net_save"
#5  0x00005555557bf3d4 in vmstate_save (se=0x5555567b79e0, f=0x555556bad3f0) at /usr/src/debug/qemu-1.4.0/savevm.c:1553
        se = 0x5555567b79e0
        f = 0x555556bad3f0
#6  qemu_savevm_state_complete (f=0x555556bad3f0) at /usr/src/debug/qemu-1.4.0/savevm.c:1733
        len = <optimized out>
        se = 0x5555567b79e0
        ret = <optimized out>
#7  0x00005555556edfde in buffered_file_thread (opaque=0x555555c5d660 <current_migration.19549>) at migration.c:711
        old_vm_running = <optimized out>
        start_time = 319675132
        end_time = <optimized out>
        current_time = 319675116
        pending_size = <optimized out>
        s = 0x555555c5d660 <current_migration.19549>
        initial_time = 319675114
        max_size = 1006620
        last_round = <optimized out>
        ret = <optimized out>
#8  0x00007ffff626dc53 in start_thread (arg=0x7ffeaf1f1700) at pthread_create.c:308
        __res = <optimized out>
        pd = 0x7ffeaf1f1700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140731836471040, 5778023194641874489, 1, 140731836471744, 140731836471040, 20, -5777283494420365767, 
                -5778042220252578247}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#9  0x00007ffff2949ecd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
No locals.



Expected results:
Migration succeeds, and there is no core dump.

Additional info:

Comment 2 jason wang 2013-04-26 08:33:17 UTC
The issue was introduced by commit 85cf2a8d ("virtio: move vmstate change tracking to core"), which enables vhost_net during the vmstate change handler.

BTW, this can be easily reproduced with:
set_link hn0 off
savevm t1
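
For readers unfamiliar with that commit, here is a rough self-contained C model of the interaction it introduces; the names are assumptions for illustration, not the upstream code. The virtio core registers a VM state change callback, and when migration stops the VM, the callback re-applies the device status, which is exactly where virtio-net decides whether to start or stop vhost_net:

/* Toy model of the vmstate-change path; not QEMU source. */
#include <stdbool.h>
#include <stdio.h>

typedef void (*vm_change_cb)(void *opaque, bool running);

static vm_change_cb vm_handler;   /* one handler suffices for the sketch */
static void *vm_handler_opaque;

static void add_vm_change_handler(vm_change_cb cb, void *opaque)
{
    vm_handler = cb;
    vm_handler_opaque = opaque;
}

/* Stand-in for the virtio-net status update: the vhost start/stop
 * decision lives here (see the sketch after comment 3). */
static void virtio_net_set_status(void *opaque, bool running)
{
    printf("set_status(running=%d): vhost start/stop decided here\n",
           running);
}

int main(void)
{
    /* What the commit effectively does: the virtio core tracks VM state
     * changes and re-applies the device status on each transition. */
    add_vm_change_handler(virtio_net_set_status, NULL);

    /* Migration completion stops the VM; the handler fires, and with the
     * link set off the buggy status check (see comment 3) starts
     * vhost_net right before virtio_net_save() asserts it is stopped. */
    vm_handler(vm_handler_opaque, false);
    return 0;
}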

Comment 3 jason wang 2013-04-26 10:01:11 UTC
(In reply to comment #2)
> The issue was introduced by commit 85cf2a8d ("virtio: move vmstate change
> tracking to core"), which enables vhost_net during the vmstate change
> handler.
> 
> BTW, this can be easily reproduced with:
> set_link hn0 off
> savevm t1

I spoke too soon; the issue is that virtio_net_vhost_status() does not check the vhost status properly.

Could you please try to see if RHEL6 has the same issue?
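
To make "does not check the vhost status properly" concrete, below is a self-contained C model of the suspected decision logic; it is a sketch built on assumptions from comments 2 and 3, not the actual hw/virtio-net.c code. The modeled flaw: the link state is ANDed into the early-return test instead of into the desired backend state, so once the link is set off every status update falls through and toggles vhost, and the update fired when the VM stops for migration starts vhost_net again just before virtio_net_save() runs:

/* Toy model of the buggy status check; not QEMU source. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct vnet {
    bool vm_running;
    bool link_up;        /* cleared by "set_link bd off" */
    bool vhost_started;
};

/* Desired backend state: run only while the VM runs and the link is up. */
static bool net_started(const struct vnet *n)
{
    return n->vm_running && n->link_up;
}

static void vhost_status_buggy(struct vnet *n)
{
    /* Flaw as modeled: with the link down this never returns early,
     * so every call toggles the backend instead of converging on the
     * desired state. */
    if (n->vhost_started == net_started(n) && n->link_up) {
        return;
    }
    n->vhost_started = !n->vhost_started;   /* start or stop vhost_net */
}

int main(void)
{
    struct vnet n = { .vm_running = true, .link_up = true,
                      .vhost_started = true };

    n.link_up = false;         /* (qemu) set_link bd off */
    vhost_status_buggy(&n);    /* correctly stops vhost_net ... */
    assert(!n.vhost_started);

    n.vm_running = false;      /* migration stops the VM; the vmstate
                                * change handler re-applies the status */
    vhost_status_buggy(&n);    /* ... and starts it again */

    printf("vhost_started at save time: %d\n", n.vhost_started);
    assert(!n.vhost_started);  /* aborts here, like virtio_net_save() */
    return 0;
}

Per "Fixed In Version: qemu-kvm-1.5.0-1.el7", the rebase carried the fix. One plausible shape, under the same assumptions, is to fold the link state into the desired-state computation (compare vhost_started against net_started() alone), so a down link keeps the backend stopped rather than toggling it on every status change.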

Comment 4 Qian Guo 2013-04-27 03:16:38 UTC
(In reply to comment #3)
> (In reply to comment #2)
> > The issue was introduced by commit 85cf2a8d ("virtio: move vmstate change
> > tracking to core"), which enables vhost_net during the vmstate change
> > handler.
> > 
> > BTW, this can be easily reproduced with:
> > set_link hn0 off
> > savevm t1
> 
> I spoke too soon; the issue is that virtio_net_vhost_status() does not
> check the vhost status properly.
> 
> Could you please try to see if RHEL6 has the same issue?
Hi, Jason

Hit the same issue on a RHEL6.4 host w/ qemu-kvm-0.12.1.2-2.361.el6.x86_64, so should I file a RHEL6 bug against this?

Thank you!

Qian Guo

Comment 5 jason wang 2013-04-27 06:13:16 UTC
(In reply to comment #4)
> (In reply to comment #3)
> > (In reply to comment #2)
> > > The issue was introduced by commit 85cf2a8d ("virtio: move vmstate change
> > > tracking to core"), which enables vhost_net during the vmstate change
> > > handler.
> > > 
> > > BTW, this can be easily reproduced with:
> > > set_link hn0 off
> > > savevm t1
> > 
> > I spoke too soon; the issue is that virtio_net_vhost_status() does not
> > check the vhost status properly.
> > 
> > Could you please try to see if RHEL6 has the same issue?
> Hi, Jason
> 
> Hit the same issue on a RHEL6.4 host w/ qemu-kvm-0.12.1.2-2.361.el6.x86_64,
> so should I file a RHEL6 bug against this?
> 
> Thank you!
> 
> Qian Guo

Please do it.

Thanks

Comment 7 Jun Li 2013-12-31 05:50:27 UTC
Reproduce this bug:

Version-Release number of selected component (if applicable):
qemu-kvm-1.4.0-4.el7.x86_64
Steps as in comment 0.
<cli>:
src:
# gdb --args /usr/libexec/qemu-kvm -cpu host -m 4G -smp 8,sockets=1,cores=8,threads=1 -M pc -enable-kvm -name win2012 -drive file=/home/RHEL-Server-7.0-64.qcow2_v3,if=none,format=qcow2,werror=stop,rerror=stop,media=disk,id=drive-scsi0-disk0 -device virtio-scsi-pci,id=scsi0,addr=0x4 -device scsi-hd,scsi-id=0,lun=0,bus=scsi0.0,drive=drive-scsi0-disk0,id=virtio-disk0 -nodefaults -nodefconfig -monitor stdio -netdev tap,id=bd,script=/etc/qemu-ifup,vhost=on,ifname=qiguo1 -device virtio-net-pci,netdev=bd,mac=54:52:1a:46:0b:02,id=vnic1 -vnc :20 -vga std -boot menu=on -device virtio-balloon-pci,id=balloon1
---
dst:
# gdb --args /usr/libexec/qemu-kvm -cpu host -m 4G -smp 8,sockets=1,cores=8,threads=1 -M pc -enable-kvm -name win2012 -drive file=/home/RHEL-Server-7.0-64.qcow2_v3,if=none,format=qcow2,werror=stop,rerror=stop,media=disk,id=drive-scsi0-disk0 -device virtio-scsi-pci,id=scsi0,addr=0x4 -device scsi-hd,scsi-id=0,lun=0,bus=scsi0.0,drive=drive-scsi0-disk0,id=virtio-disk0 -nodefaults -nodefconfig -monitor stdio -netdev tap,id=bd,script=/etc/qemu-ifup,vhost=on,ifname=qiguo2 -device virtio-net-pci,netdev=bd,mac=54:52:1a:46:0b:02,id=vnic1 -vnc :21 -vga std -boot menu=on -device virtio-balloon-pci,id=balloon1 -incoming tcp::5800
---
After step 3, actual result:
(qemu) qemu-kvm: /builddir/build/BUILD/qemu-1.4.0/hw/virtio-net.c:1125: virtio_net_save: Assertion `!n->vhost_started' failed.

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffebe814700 (LWP 14917)]
0x00007ffff3b35979 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00007ffff3b35979 in raise () from /lib64/libc.so.6
#1  0x00007ffff3b37088 in abort () from /lib64/libc.so.6
#2  0x00007ffff3b2e8e6 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007ffff3b2e992 in __assert_fail () from /lib64/libc.so.6
#4  0x000055555575ca6c in virtio_net_save ()
#5  0x000055555577a164 in qemu_savevm_state_complete ()
#6  0x00005555556ad20e in buffered_file_thread ()
#7  0x00007ffff625dde3 in start_thread () from /lib64/libpthread.so.0
#8  0x00007ffff3bf626d in clone () from /lib64/libc.so.6
------------------
Verify this bug:

Version-Release number of selected component (if applicable):
qemu-kvm-1.5.3-30.el7.x86_64
Steps as in comment 0.
<cli>
src:
# gdb --args /usr/libexec/qemu-kvm -cpu host -m 4G -smp 8,sockets=1,cores=8,threads=1 -M pc -enable-kvm -name win2012 -drive file=/home/RHEL-Server-7.0-64.qcow2_v3,if=none,format=qcow2,werror=stop,rerror=stop,media=disk,id=drive-scsi0-disk0 -device virtio-scsi-pci,id=scsi0,addr=0x4 -device scsi-hd,scsi-id=0,lun=0,bus=scsi0.0,drive=drive-scsi0-disk0,id=virtio-disk0 -nodefaults -nodefconfig -monitor stdio -netdev tap,id=bd,script=/etc/qemu-ifup,vhost=on,ifname=qiguo1 -device virtio-net-pci,netdev=bd,mac=54:52:1a:46:0b:02,id=vnic1 -vnc :20 -vga std -boot menu=on -device virtio-balloon-pci,id=balloon1
---
dst:
# gdb --args /usr/libexec/qemu-kvm -cpu host -m 4G -smp 8,sockets=1,cores=8,threads=1 -M pc -enable-kvm -name win2012 -drive file=/home/RHEL-Server-7.0-64.qcow2_v3,if=none,format=qcow2,werror=stop,rerror=stop,media=disk,id=drive-scsi0-disk0 -device virtio-scsi-pci,id=scsi0,addr=0x4 -device scsi-hd,scsi-id=0,lun=0,bus=scsi0.0,drive=drive-scsi0-disk0,id=virtio-disk0 -nodefaults -nodefconfig -monitor stdio -netdev tap,id=bd,script=/etc/qemu-ifup,vhost=on,ifname=qiguo2 -device virtio-net-pci,netdev=bd,mac=54:52:1a:46:0b:02,id=vnic1 -vnc :21 -vga std -boot menu=on -device virtio-balloon-pci,id=balloon1 -incoming tcp::5800
---
After step 3, migration finished and the guest works well.
After 'set_link bd on', the guest network works well.
----
Based on the above test, this bug has been verified.

Comment 10 Ludek Smid 2014-06-13 11:34:43 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.