Bug 852383 - libvirtd dies when starting a domain with an openvswitch interface
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Priority: high
Severity: high
Assigned To: Alex Jia
QA Contact: Virtualization Bugs
Reported: 2012-08-28 06:38 EDT by yanbing du
Modified: 2013-02-21 02:22 EST

Fixed In Version: libvirt-0.10.1-1.el6
Doc Type: Bug Fix
Last Closed: 2013-02-21 02:22:24 EST
Type: Bug

Attachments:
libvirtd log (64.12 KB, text/plain), 2012-08-28 06:38 EDT, yanbing du
Description yanbing du 2012-08-28 06:38:46 EDT
Created attachment 607472: libvirtd log

Description of problem:
Configure a domain with an openvswitch-type interface, then start it; libvirtd dies.

Version-Release number of selected component (if applicable):
libvirt-0.10.0-0rc1.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Edit the interface of an existing domain, e.g.:
 ...
 <interface type='bridge'>
  <mac address='52:54:00:71:b1:b6'/>
  <source bridge='ovsbr'/>
  <virtualport type='openvswitch'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
 </interface>

2. Start the domain
# virsh start test
error: Failed to start domain test
error: End of file while reading data: Input/output error
error: Failed to reconnect to the hypervisor

# service libvirtd status
libvirtd dead but pid file exists

  
Actual results:
libvirtd dies

Expected results:
libvirtd does not die

Additional info:
In fact, it is not even necessary to create an Open vSwitch bridge. For example:
# brctl addbr br0
# ifconfig br0 up
Then using 'br0' as the source bridge also reproduces this bug.
Comment 2 Alex Jia 2012-08-28 07:32:36 EDT
Patch posted upstream, awaiting review:
https://www.redhat.com/archives/libvir-list/2012-August/msg01779.html


Gdb debug trace:

Program received signal SIGSEGV, Segmentation fault.
0x0000003cc5e81bf3 in virNetDevOpenvswitchAddPort (brname=0x7fdc04008be0 "virbr0", ifname=0x7fdc04008b10 "vnet0", macaddr=<value optimized out>, vmuuid=<value optimized out>, ovsport=0x7fdc10154080,
    virtVlan=0x7fdc10154060) at util/virnetdevopenvswitch.c:103
103                 virBufferAsprintf(buf, "tag=%d", virtVlan->tag[0]);
(gdb) bt
#0  0x0000003cc5e81bf3 in virNetDevOpenvswitchAddPort (brname=0x7fdc04008be0 "virbr0", ifname=0x7fdc04008b10 "vnet0", macaddr=<value optimized out>, vmuuid=<value optimized out>, ovsport=0x7fdc10154080,
    virtVlan=0x7fdc10154060) at util/virnetdevopenvswitch.c:103
#1  0x0000003cc5e8242c in virNetDevTapCreateInBridgePort (brname=0x7fdc04008be0 "virbr0", ifname=0x7fdc10153ff8, macaddr=0x7fdc10153f74,
    vmuuid=0x7fdc101535b8 "\257W\336\263\230\243l\267\266\021\326;\315\342\327\325\020<\025\020\334\177", tapfd=0x7fdc1d736d14, virtPortProfile=0x7fdc10154080, virtVlan=0x7fdc10154060, flags=3)
    at util/virnetdevtap.c:327
#2  0x00007fdc163d201f in qemuNetworkIfaceConnect (def=0x7fdc101535b0, conn=0x7fdc1006d150, driver=0x7fdc10012000, net=0x7fdc10153f70, qemuCaps=<value optimized out>) at qemu/qemu_command.c:258
#3  0x00007fdc163dd6cf in qemuBuildCommandLine (conn=0x7fdc1006d150, driver=0x7fdc10012000, def=0x7fdc101535b0, monitor_chr=0x7fdc1d7370a8, monitor_json=8, qemuCaps=0x7fdc04000b80, migrateFrom=0x0,
    migrateFd=-1, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_CREATE) at qemu/qemu_command.c:5292
#4  0x00007fdc163f9e99 in qemuProcessStart (conn=0x7fdc1006d150, driver=0x7fdc10012000, vm=0x7fdc101594b0, migrateFrom=0x0, stdin_fd=-1, stdin_path=0x0, snapshot=0x0,
    vmop=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=1) at qemu/qemu_process.c:3643
#5  0x00007fdc1643fd2e in qemuDomainObjStart (conn=0x7fdc1006d150, driver=0x7fdc10012000, vm=0x7fdc101594b0, flags=<value optimized out>) at qemu/qemu_driver.c:5352
#6  0x00007fdc16440362 in qemuDomainStartWithFlags (dom=0x7fdc04009c90, flags=0) at qemu/qemu_driver.c:5409
#7  0x0000003cc5f01a00 in virDomainCreate (domain=0x7fdc04009c90) at libvirt.c:8165
#8  0x0000000000429082 in remoteDispatchDomainCreate (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, rerr=0x7fdc1d737bc0, args=<value optimized out>,
    ret=<value optimized out>) at remote_dispatch.h:874
#9  remoteDispatchDomainCreateHelper (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, rerr=0x7fdc1d737bc0, args=<value optimized out>, ret=<value optimized out>)
    at remote_dispatch.h:852
#10 0x0000003cc5f55f7d in virNetServerProgramDispatchCall (prog=0x243c440, server=0x2432fb0, client=0x24415f0, msg=0x2434440) at rpc/virnetserverprogram.c:424
#11 virNetServerProgramDispatch (prog=0x243c440, server=0x2432fb0, client=0x24415f0, msg=0x2434440) at rpc/virnetserverprogram.c:297
#12 0x0000003cc5f524ce in virNetServerProcessMsg (srv=<value optimized out>, client=0x24415f0, prog=<value optimized out>, msg=0x2434440) at rpc/virnetserver.c:170
#13 0x0000003cc5f52c5c in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=0x2432fb0) at rpc/virnetserver.c:191
#14 0x0000003cc5e6974e in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#15 0x0000003cc5e68d36 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#16 0x000000312ce07851 in start_thread () from /lib64/libpthread.so.0
#17 0x000000312cae767d in clone () from /lib64/libc.so.6
Comment 3 Alex Jia 2012-08-29 04:53:45 EDT
In POST:

commit 83b85e3e8fa2f2fdc86585787d0db617fe81c710
Author: Alex Jia <ajia@redhat.com>
Date:   Wed Aug 29 10:56:04 2012 +0800

    util: Prevent libvirtd crash from virNetDevOpenvswitchAddPort()
    
    * src/util/virnetdevopenvswitch.c (virNetDevOpenvswitchAddPort): avoid libvirtd
    crash due to derefing a NULL virtVlan->tag.
Comment 4 Laine Stump 2012-08-29 07:07:27 EDT
This crash is an indicator of a wider problem - that the vlan settings from <network>s and <portgroup>s are being ignored for Open vSwitch interfaces.

If you apply this patch, the vlan pointer sent to virNetDevOpenvswitchAddPort will always be NULL if there are no tags (eliminating the crash), and it will fix the problem of ignoring networks and portgroups:

  https://www.redhat.com/archives/libvir-list/2012-August/msg01835.html

(the patch that Alex has already pushed is harmless, though, and by itself does fix the crash.)
Comment 5 Alex Jia 2012-08-29 07:19:09 EDT
(In reply to comment #4)
> This crash is an indicator of a wider problem - that the vlan settings from
> <network>s and <portgroup>s are being ignored for Open vSwitch interfaces.
> 
> If you apply this patch, the vlan pointer sent to
> virNetDevOpenvswitchAddPort will always be NULL if there are  no tags
> (eliminating the crash), and it will fix the problem of ignoring networks
> and portgroups:
> 
>   https://www.redhat.com/archives/libvir-list/2012-August/msg01835.html
> 
> (the patch that Alex has already pushed is harmless, though, and by itself
> does fix the crash.)

Laine, I dropped my patch and applied yours; unfortunately, libvirtd still crashes.

Program received signal SIGSEGV, Segmentation fault.
virBufferContentAndReset (buf=0xffffffff) at util/buf.c:226
226	    if (buf->error) {
(gdb) bt
#0  virBufferContentAndReset (buf=0xffffffff) at util/buf.c:226
#1  0x00007f6ebd21bb7d in virNetDevOpenvswitchAddPort (brname=0x7f6ea801ecc0 "virbr0", ifname=0x7f6ea801ebf0 "vnet0", macaddr=<value optimized out>, vmuuid=<value optimized out>, 
    ovsport=<value optimized out>, virtVlan=0x0) at util/virnetdevopenvswitch.c:109
#2  0x00007f6ebd21c4ac in virNetDevTapCreateInBridgePort (brname=0x7f6ea801ecc0 "virbr0", ifname=0x7f6ea8153928, macaddr=0x7f6ea81538a4, 
    vmuuid=0x7f6ea80666b8 "\257W\336\263\230\243l\267\266\021\326;\315\342\327\325\340\063\025\250n\177", tapfd=0x7f6eb72edd14, virtPortProfile=0x7f6ea81539b0, virtVlan=0x0, flags=3)
    at util/virnetdevtap.c:327
#3  0x00007f6eaff88068 in qemuNetworkIfaceConnect (def=0x7f6ea80666b0, conn=0x7f6ea4000bd0, driver=0x7f6ea8070bf0, net=0x7f6ea81538a0, qemuCaps=<value optimized out>) at qemu/qemu_command.c:258
#4  0x00007f6eaff9370f in qemuBuildCommandLine (conn=0x7f6ea4000bd0, driver=0x7f6ea8070bf0, def=0x7f6ea80666b0, monitor_chr=0x7f6eb72ee0a8, monitor_json=8, qemuCaps=0x7f6ea801f480, migrateFrom=0x0, 
    migrateFd=-1, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_CREATE) at qemu/qemu_command.c:5292
#5  0x00007f6eaffafeb9 in qemuProcessStart (conn=0x7f6ea4000bd0, driver=0x7f6ea8070bf0, vm=0x7f6ea8159010, migrateFrom=0x0, stdin_fd=-1, stdin_path=0x0, snapshot=0x0, 
    vmop=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=1) at qemu/qemu_process.c:3643
#6  0x00007f6eafff5e2e in qemuDomainObjStart (conn=0x7f6ea4000bd0, driver=0x7f6ea8070bf0, vm=0x7f6ea8159010, flags=<value optimized out>) at qemu/qemu_driver.c:5354
#7  0x00007f6eafff6462 in qemuDomainStartWithFlags (dom=0x7f6ea80ccf80, flags=0) at qemu/qemu_driver.c:5411
#8  0x00007f6ebd29ba80 in virDomainCreate (domain=0x7f6ea80ccf80) at libvirt.c:8165
#9  0x0000000000429082 in remoteDispatchDomainCreate (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, rerr=0x7f6eb72eebc0, args=<value optimized out>, 
    ret=<value optimized out>) at remote_dispatch.h:874
#10 remoteDispatchDomainCreateHelper (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, rerr=0x7f6eb72eebc0, args=<value optimized out>, ret=<value optimized out>)
    at remote_dispatch.h:852
#11 0x00007f6ebd2f000d in virNetServerProgramDispatchCall (prog=0x1605330, server=0x15fbe60, client=0x160a600, msg=0x16143b0) at rpc/virnetserverprogram.c:424
#12 virNetServerProgramDispatch (prog=0x1605330, server=0x15fbe60, client=0x160a600, msg=0x16143b0) at rpc/virnetserverprogram.c:297
#13 0x00007f6ebd2ec55e in virNetServerProcessMsg (srv=<value optimized out>, client=0x160a600, prog=<value optimized out>, msg=0x16143b0) at rpc/virnetserver.c:170
#14 0x00007f6ebd2eccec in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=<value optimized out>) at rpc/virnetserver.c:191
#15 0x00007f6ebd20374e in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#16 0x00007f6ebd202d36 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#17 0x000000312ce07851 in start_thread () from /lib64/libpthread.so.0
#18 0x000000312cae767d in clone () from /lib64/libc.so.6
Comment 6 Alex Jia 2012-08-30 03:15:18 EDT
(In reply to comment #5)
> (In reply to comment #4)
> > [...]
> 
> Laine, I dropped my patch and applied yours; unfortunately, libvirtd still
> crashes.


This issue has been fixed by Kyle's patch:

commit 5e465df6be8bcb00f0b4bff831e91f4042fae272
Author: Kyle Mestery <kmestery@cisco.com>
Date:   Wed Aug 29 14:44:36 2012 -0400

    Fix a crash when using Open vSwitch virtual ports
    
    Fixup buffer usage when handling VLANs. Also fix the logic
    used to determine if the virNetDevVlanPtr is valid or not.
    Fixes crashes in the latest code when using Open vSwitch
    virtualports.
    
    Signed-off-by: Kyle Mestery <kmestery@cisco.com>
Comment 8 yanbing du 2012-09-03 04:18:20 EDT
Verified with libvirt-0.10.1-1.el6.x86_64.
Defining and starting a guest with an openvswitch interface no longer crashes libvirtd.
Comment 9 errata-xmlrpc 2013-02-21 02:22:24 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0276.html
