Bug 910422 - [libvirt] Libvirt crashes with segmentation fault during creation of VM
Summary: [libvirt] Libvirt crashes with segmentation fault during creation of VM
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Osier Yang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-02-12 15:49 UTC by Gadi Ickowicz
Modified: 2014-08-22 01:41 UTC
CC List: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-03-14 11:43:36 UTC
Target Upstream Version:
Embargoed:


Attachments
vdsm + libvirt logs + core dump (2.38 MB, application/x-gzip)
2013-02-12 15:49 UTC, Gadi Ickowicz

Description Gadi Ickowicz 2013-02-12 15:49:53 UTC
Created attachment 696561
vdsm + libvirt logs + core dump

Description of problem:
Attempting to run a VM with the attached XML (generated by vdsm) resulted in libvirtd crashing with a segmentation fault. The core dump file is attached. After restarting libvirtd, running the same XML works.

I am not sure how to reproduce this; I was running similar scenarios many times using automated tests.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-18.el6.x86_64

How reproducible:
?


Additional info:
vdsm + libvirt logs and libvirt core dump attached in logs.tar.gz

vm xml:
<domain type="kvm">
        <name>short_agent-3.1_rhel6.x_jenkins-x86_64VM</name>
        <uuid>bcc8405a-bfc8-4019-88ca-fa24940b6210</uuid>
        <memory>2097152</memory>
        <currentMemory>2097152</currentMemory>
        <vcpu>1</vcpu>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/short_agent-3.1_rhel6.x_jenkins-x86_64VM.com.redhat.rhevm.vdsm"/>
                </channel>
                <channel type="unix">
                        <target name="org.qemu.guest_agent.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/short_agent-3.1_rhel6.x_jenkins-x86_64VM.org.qemu.guest_agent.0"/>
                </channel>
                <input bus="ps2" type="mouse"/>
                <channel type="spicevmc">
                        <target name="com.redhat.spice.0" type="virtio"/>
                </channel>
                <graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
                        <channel mode="secure" name="main"/>
                        <channel mode="secure" name="inputs"/>
                        <channel mode="secure" name="cursor"/>
                        <channel mode="secure" name="playback"/>
                        <channel mode="secure" name="record"/>
                        <channel mode="secure" name="display"/>
                        <channel mode="secure" name="usbredir"/>
                        <channel mode="secure" name="smartcard"/>
                        <listen network="vdsm-rhevm" type="network"/>
                </graphics>
                <sound model="ich6"/>
                <video>
                        <model heads="1" type="qxl" vram="65536"/>
                </video>
                <interface type="bridge">
                        <mac address="00:1a:4a:16:81:03"/>
                        <model type="virtio"/>
                        <source bridge="rhevm"/>
                        <filterref filter="vdsm-no-mac-spoofing"/>
                        <link state="up"/>
                        <boot order="2"/>
                </interface>
                <memballoon model="virtio"/>
                <disk device="cdrom" snapshot="no" type="file">
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial></serial>
                </disk>
                <disk device="disk" snapshot="no" type="block">
                        <source dev="/rhev/data-center/50760e2d-1e7c-4bf7-882d-b206a8e25854/3054c7b6-ae89-47ae-8f3d-ff84b0293d13/images/90410884-9926-4039-bdf5-637c136eb108/04351391-c57b-456b-81d7-559b74eed332"/>
                        <target bus="virtio" dev="vda"/>
                        <serial>90410884-9926-4039-bdf5-637c136eb108</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
                </disk>
        </devices>
        <os>
                <type arch="x86_64" machine="rhel6.4.0">hvm</type>
                <smbios mode="sysinfo"/>
        </os>
        <sysinfo type="smbios">
                <system>
                        <entry name="manufacturer">Red Hat</entry>
                        <entry name="product">RHEV Hypervisor</entry>
                        <entry name="version">6Server-6.4.0.4.el6</entry>
                        <entry name="serial">4C4C4544-004A-4410-804C-B5C04F39354A</entry>
                        <entry name="uuid">bcc8405a-bfc8-4019-88ca-fa24940b6210</entry>
                </system>
        </sysinfo>
        <clock adjustment="0" offset="variable">
                <timer name="rtc" tickpolicy="catchup"/>
        </clock>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>Opteron_G3</model>
                <topology cores="1" sockets="1" threads="1"/>
        </cpu>
</domain>
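
For anyone who wants to replay this outside of vdsm: a minimal way to feed the XML above to libvirt directly is to save it to a file and start a transient domain with virsh. This is only a sketch: the file path is illustrative, the masked SPICE password and the /rhev/... storage paths have to be valid on the host, and vdsm itself drives libvirt through the API rather than through virsh.

# virsh -c qemu:///system create /tmp/short_agent-vm.xml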

Comment 2 RHEL Program Management 2013-02-16 06:47:35 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 3 Huang Wenlong 2013-02-18 07:07:19 UTC
Hi Gadi,
I cannot reproduce this bug.
I started a guest with libvirt in the RHEV-H environment and libvirtd did not crash.
Can you provide some details for reproducing this, or the key reason for the crash?

libvirt-0.10.2-18.el6.x86_64
vdsm-4.10.2-1.5.el6.x86_64

# virsh list 
Please enter your authentication name: test
Please enter your password: 
 Id    Name                           State
----------------------------------------------------
 4     whuang                         running

[root@intel-w3520-12-1 ~]# virsh dumpxml whuang
Please enter your authentication name: test
Please enter your password: 
<domain type='kvm' id='4'>
  <name>whuang</name>
  <uuid>38753d0a-b2c0-b2d8-4288-74967a92c312</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Red Hat</entry>
      <entry name='product'>RHEV Hypervisor</entry>
      <entry name='version'>6Server-6.4.0.4.el6</entry>
      <entry name='serial'>2EF54700-8D9A-11DE-9C5C-9B3802226DA5_00:24:7e:70:04:52</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Conroe</model>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='-43200' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/f3c552ec-4759-4ba4-823c-2de3e0510cfc/66713457-a9fc-4cd5-9311-e316e9bbfdfd/images/30009ab8-01e7-48ad-964f-f982edeea355/54aac95a-df57-49d2-8be5-8aca13725cf6'/>
      <target dev='vda' bus='virtio'/>
      <serial>30009ab8-01e7-48ad-964f-f982edeea355</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:a8:7a:8a'/>
      <source bridge='rhevm'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/whuang.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/whuang.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='0' keymap='en-us'>
      <listen type='address' address='0'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c496,c753</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c496,c753</imagelabel>
  </seclabel>
</domain>



Wenlong

Comment 4 Gadi Ickowicz 2013-02-18 09:51:33 UTC
(In reply to comment #3)

Hi,

Unfortunately I was not able to reproduce this bug either. The scenario I was running was simply creating a VM, plugging in a disk, and then removing the VM. I ran this scenario many times (using an automated test), and one of the runs failed with this core dump.

No other clues as to how to reproduce it at the moment.

Comment 5 Dave Allan 2013-02-18 22:19:48 UTC
(In reply to comment #4)
> Unfortunately I was not able to reproduce this bug either, but the scenario
> I was running was simply creating a vm, plugging in a disk and then removing
> the vm. I ran this scenario many times (using automated test) and one of the
> times it failed with that core dump.

Osier, is this a dup of the hotplug problems Dan reported?

Comment 6 Osier Yang 2013-02-19 13:05:20 UTC
(In reply to comment #5)
> (In reply to comment #4)
> > Unfortunately I was not able to reproduce this bug either, but the scenario
> > I was running was simply creating a vm, plugging in a disk and then removing
> > the vm. I ran this scenario many times (using automated test) and one of the
> > times it failed with that core dump.
> 
> Osier, is this a dup of the hotplug problems Dan reported?

Do you mean bug 908073? If so, I don't think it's a duplicate, because the sgio patches are not even included yet. I'm fetching the big core file to get the backtrace.

Comment 7 Osier Yang 2013-02-20 06:24:50 UTC
Reading symbols from /usr/sbin/libvirtd...Reading symbols from /usr/lib/debug/usr/sbin/libvirtd.debug...done.
done.
[New Thread 2754]
[New Thread 2749]
[New Thread 2755]
[New Thread 2756]
[New Thread 2757]
[New Thread 2758]
[New Thread 2750]
[New Thread 2759]
[New Thread 2751]
[New Thread 2753]
[New Thread 2752]
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug/lib64/ld-2.12.so.debug...done.
done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Core was generated by `libvirtd --daemon --listen'.
Program terminated with signal 11, Segmentation fault.
#0  0x0000003700076223 in ?? ()
(gdb) thread apply bt all
(gdb) bt
#0  0x0000003700076223 in ?? ()
#1  0x0000003000000028 in ?? ()
#2  0x00007fd6ebcc57b0 in ?? ()
#3  0x00007fd6ebcc56f0 in ?? ()
#4  0x00000000ebcc57c0 in ?? ()
#5  0x0000003000000028 in ?? ()
#6  0x00007fd6ebcc57d0 in ?? ()
#7  0x00007fd6ebcc5710 in ?? ()
#8  0x00007fd6c0000020 in ?? ()
#9  0x000000000000044c in ?? ()
#10 0x00007fd6c0000020 in ?? ()
#11 0x0000000000000000 in ?? ()

@Gadi. Can you paste the backtrace instead?
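
For reference, getting a symbol-resolved backtrace out of a core like this needs the debuginfo packages that match the exact builds the host was running (libvirt-0.10.2-18.el6 and its glibc). A rough sketch; the core file path is illustrative and exact package names may differ:

# debuginfo-install libvirt-0.10.2-18.el6 glibc
# gdb /usr/sbin/libvirtd /path/to/core
(gdb) thread apply all bt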

Comment 8 Gadi Ickowicz 2013-02-20 12:53:20 UTC
(In reply to comment #7)
> @Gadi. Can you paste the backtrace instead?

Other than the files I have already attached, I don't have any other files or information to provide.

Comment 9 Osier Yang 2013-03-06 14:59:51 UTC
(In reply to comment #8)
> Other than the files I have already attached I don't have any other
> files/information to provide.

But the core file is not much use as it stands; all I can get from it are those unresolved "??" frames. Can you try to reproduce it and get a backtrace with symbols?

Comment 10 Haim 2013-03-07 08:49:55 UTC
(In reply to comment #9)

Osier, can you try to get the correct debuginfo for this libvirt version? We cannot find it. I know that eblake had some progress with this bug; it may be worth asking him.

Comment 11 Jakub Libosvar 2013-03-07 08:58:23 UTC
Relevant threads from attached core file.

Thread 2 (Thread 0x7fd6f435c860 (LWP 2749)):
#0  0x000000370008eedd in two_way_short_needle (haystack_start=<value optimized out>, 
    needle_start=<value optimized out>) at ../string/str-two-way.h:269
#1  __strstr_sse2 (haystack_start=<value optimized out>, 
    needle_start=<value optimized out>) at ../string/strstr.c:84
#2  0x00007fd6f54331b4 in virLogFiltersCheck (
    category=0x7fd6f558095b "file.util/event_poll.c", priority=1, 
    funcname=0x7fd6f5580f30 "virEventPollMakePollFDs", linenr=378, flags=0, 
    fmt=0x7fd6f5580c80 "Prepare n=%d w=%d, f=%d e=%d d=%d", vargs=0x7fff24f78290)
    at util/logging.c:520
#3  virLogVMessage (category=0x7fd6f558095b "file.util/event_poll.c", priority=1, 
    funcname=0x7fd6f5580f30 "virEventPollMakePollFDs", linenr=378, flags=0, 
    fmt=0x7fd6f5580c80 "Prepare n=%d w=%d, f=%d e=%d d=%d", vargs=0x7fff24f78290)
    at util/logging.c:709
#4  0x00007fd6f543358c in virLogMessage (category=<value optimized out>, 
    priority=<value optimized out>, funcname=<value optimized out>, 
    linenr=<value optimized out>, flags=<value optimized out>, fmt=<value optimized out>)
    at util/logging.c:670
#5  0x00007fd6f542c742 in virEventPollMakePollFDs () at util/event_poll.c:374
#6  virEventPollRunOnce () at util/event_poll.c:605
#7  0x00007fd6f542bb67 in virEventRunDefaultImpl () at util/event.c:247
#8  0x00007fd6f551b63d in virNetServerRun (srv=0x2097f80) at rpc/virnetserver.c:748
#9  0x00000000004235b7 in main (argc=<value optimized out>, argv=<value optimized out>)
    at libvirtd.c:1228

Thread 1 (Thread 0x7fd6ebcc6700 (LWP 2754)):
#0  0x0000003700076223 in malloc_consolidate (av=0x7fd6c0000020) at malloc.c:5181
#1  0x0000003700079385 in _int_malloc (av=0x7fd6c0000020, bytes=<value optimized out>)
    at malloc.c:4385
#2  0x000000370007a911 in __libc_malloc (bytes=1100) at malloc.c:3664
#3  0x00007fd6f54345ac in virReallocN (ptrptr=0x7fd6ebcc59b0, size=<value optimized out>, 
    count=<value optimized out>) at util/memory.c:160
#4  0x00007fd6f5421a07 in virBufferGrow (buf=0x7fd6ebcc59a0, len=<value optimized out>)
    at util/buf.c:129
#5  0x00007fd6f5421f64 in virBufferVasprintf (buf=0x7fd6ebcc59a0, 
    format=0x7fd6f558ea73 "<domain type='%s'", argptr=0x7fd6ebcc5610) at util/buf.c:322
#6  0x00007fd6f54220b8 in virBufferAsprintf (buf=<value optimized out>, 
    format=<value optimized out>) at util/buf.c:295
#7  0x00007fd6f5471c52 in virDomainDefFormatInternal (def=0x7fd6d0009d30, flags=0, 
    buf=0x7fd6ebcc59a0) at conf/domain_conf.c:13632
#8  0x00000000004719f1 in qemuDomainDefFormatBuf (driver=<value optimized out>, 
    def=0x7fd6d0009d30, flags=0, buf=<value optimized out>) at qemu/qemu_domain.c:1298
#9  0x0000000000471c85 in qemuDomainDefFormatXML (driver=<value optimized out>, 
    def=<value optimized out>, flags=<value optimized out>) at qemu/qemu_domain.c:1317
#10 0x0000000000462011 in qemuDomainGetXMLDesc (dom=<value optimized out>, flags=0)
    at qemu/qemu_driver.c:5363
#11 0x00007fd6f54d1c2e in virDomainGetXMLDesc (domain=0x7fd6c00cf9c0, flags=0)
    at libvirt.c:4371
#12 0x000000000043f62b in remoteDispatchDomainGetXMLDesc (server=<value optimized out>, 
    client=<value optimized out>, msg=<value optimized out>, rerr=0x7fd6ebcc5b80, 
    args=0x7fd6c00d53c0, ret=0x7fd6c00d5270) at remote_dispatch.h:2584
#13 remoteDispatchDomainGetXMLDescHelper (server=<value optimized out>, 
    client=<value optimized out>, msg=<value optimized out>, rerr=0x7fd6ebcc5b80, 
---Type <return> to continue, or q <return> to quit---
    args=0x7fd6c00d53c0, ret=0x7fd6c00d5270) at remote_dispatch.h:2560
#14 0x00007fd6f551b162 in virNetServerProgramDispatchCall (prog=0x20a3610, 
    server=0x2097f80, client=0x20a3df0, msg=0x20aa000) at rpc/virnetserverprogram.c:431
#15 virNetServerProgramDispatch (prog=0x20a3610, server=0x2097f80, client=0x20a3df0, 
    msg=0x20aa000) at rpc/virnetserverprogram.c:304
#16 0x00007fd6f551bdfe in virNetServerProcessMsg (srv=<value optimized out>, 
    client=0x20a3df0, prog=<value optimized out>, msg=0x20aa000) at rpc/virnetserver.c:170
#17 0x00007fd6f551c49c in virNetServerHandleJob (jobOpaque=<value optimized out>, 
    opaque=<value optimized out>) at rpc/virnetserver.c:191
#18 0x00007fd6f543ec4c in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:144
#19 0x00007fd6f543e539 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#20 0x0000003700407851 in start_thread (arg=0x7fd6ebcc6700) at pthread_create.c:301
#21 0x00000037000e890d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Comment 13 Osier Yang 2013-03-13 10:30:49 UTC
I wonder how I managed to check the logs against libvirt-0.10.2-9 and use libvirt-debuginfo-0.10.2-9 to get the backtrace earlier. Anyway, here is the full backtrace from the attached core file; never mind my silly mistake. :(

Program terminated with signal 11, Segmentation fault.
#0  0x0000003700076223 in malloc_init_state (av=0x7fd6c0000020) at malloc.c:2481
2481	    bin = bin_at(av,i);
#0  0x0000003700076223 in malloc_init_state (av=0x7fd6c0000020) at malloc.c:2481
#1  malloc_consolidate (av=0x7fd6c0000020) at malloc.c:5220
#2  0x0000003700079385 in _int_malloc (av=0x0, bytes=<value optimized out>) at malloc.c:4414
#3  0x000000370007a911 in __libc_malloc (bytes=1100) at malloc.c:3677
#4  0x00007fd6f54345ac in virReallocN (ptrptr=0x7fd6ebcc59b0, size=<value optimized out>, count=<value optimized out>)
    at util/memory.c:160
#5  0x00007fd6f5421a07 in virBufferGrow (buf=0x7fd6ebcc59a0, len=<value optimized out>) at util/buf.c:129
#6  0x00007fd6f5421f64 in virBufferVasprintf (buf=0x7fd6ebcc59a0, format=0x7fd6f558ea73 "<domain type='%s'", argptr=0x7fd6ebcc5610)
    at util/buf.c:322
#7  0x00007fd6f54220b8 in virBufferAsprintf (buf=<value optimized out>, format=<value optimized out>) at util/buf.c:295
#8  0x00007fd6f5471c52 in virDomainDefFormatInternal (def=0x7fd6d0009d30, flags=0, buf=0x7fd6ebcc59a0) at conf/domain_conf.c:13632

It looks like we corrupt the malloc structures somehow (an overwrite somewhere?), and when malloc tries to find the proper free chunk list for the requested allocation, it crashes while looking up the bin (the head of the proper chunk list) in the av array.
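
Not something proposed in this bug, but one heavier-weight way to catch a heap overwrite like this at the moment it happens is to run libvirtd in the foreground under valgrind's memcheck. It slows the daemon down a lot, so it is only practical on a test host, and the log path below is illustrative:

# service libvirtd stop
# valgrind --track-origins=yes --log-file=/tmp/libvirtd-valgrind.log /usr/sbin/libvirtd --listen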

Comment 14 Osier Yang 2013-03-13 12:09:44 UTC
(In reply to comment #13)

Wrong copy-paste; the remaining part of the backtrace (frames #9 and up) is:

#9  0x00000000004719f1 in qemuDomainDefFormatBuf (driver=<value optimized out>, def=0x7fd6d0009d30, flags=0, buf=<value optimized out>)
    at qemu/qemu_domain.c:1298
#10 0x0000000000471c85 in qemuDomainDefFormatXML (driver=<value optimized out>, def=<value optimized out>, flags=<value optimized out>)
    at qemu/qemu_domain.c:1317
#11 0x0000000000462011 in qemuDomainGetXMLDesc (dom=<value optimized out>, flags=0) at qemu/qemu_driver.c:5363
#12 0x00007fd6f54d1c2e in virDomainGetXMLDesc (domain=0x7fd6c00cf9c0, flags=0) at libvirt.c:4371
#13 0x000000000043f62b in remoteDispatchDomainGetXMLDesc (server=<value optimized out>, client=<value optimized out>, 
    msg=<value optimized out>, rerr=0x7fd6ebcc5b80, args=0x7fd6c00d53c0, ret=0x7fd6c00d5270) at remote_dispatch.h:2584
#14 remoteDispatchDomainGetXMLDescHelper (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, 
    rerr=0x7fd6ebcc5b80, args=0x7fd6c00d53c0, ret=0x7fd6c00d5270) at remote_dispatch.h:2560
#15 0x00007fd6f551b162 in virNetServerProgramDispatchCall (prog=0x20a3610, server=0x2097f80, client=0x20a3df0, msg=0x20aa000)
    at rpc/virnetserverprogram.c:431
#16 virNetServerProgramDispatch (prog=0x20a3610, server=0x2097f80, client=0x20a3df0, msg=0x20aa000) at rpc/virnetserverprogram.c:304
#17 0x00007fd6f551bdfe in virNetServerProcessMsg (srv=<value optimized out>, client=0x20a3df0, prog=<value optimized out>, msg=0x20aa000)
    at rpc/virnetserver.c:170
#18 0x00007fd6f551c49c in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=<value optimized out>) at rpc/virnetserver.c:191
#19 0x00007fd6f543ec4c in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#20 0x00007fd6f543e539 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#21 0x0000003700407851 in start_thread (arg=0x7fd6ebcc6700) at pthread_create.c:301
#22 0x00000037000e890d in setfsuid () at ../sysdeps/unix/syscall-template.S:84

> 
> It looks like we break the malloc structures by some means (overwritten
> somewhere?), and when trying to find out the proper free chrunk list to
> allocate the requested memory, it crashs on finding out the bin (the head of
> the proper chrunk list) from the av array first.

Comment 15 Osier Yang 2013-03-13 12:17:07 UTC
(In reply to comment #14)

It's really hard to know where we corrupt the malloc structures, since I can't get the bug reproduced. I'm wondering whether the bug can still be reproduced by the testing script, but with the environment variable MALLOC_CHECK_ set to 1 when starting libvirtd, before testing. That should uncover the place that corrupts the malloc structures (such as an overwrite).
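
A minimal sketch of what that could look like on a RHEL 6 test host, assuming libvirtd is normally started by its init script (the exact startup mechanism on the affected hosts may differ):

# service libvirtd stop
# MALLOC_CHECK_=1 /usr/sbin/libvirtd --daemon --listen

MALLOC_CHECK_=1 makes glibc print a diagnostic as soon as it detects heap corruption; MALLOC_CHECK_=3 additionally aborts, which would leave a fresh core dump right at the corrupting call. If the init script has to be used, exporting the variable from /etc/sysconfig/libvirtd should have the same effect (assuming that file is sourced by the init script, as is usual on RHEL 6).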

Comment 16 Osier Yang 2013-03-13 12:38:52 UTC
Guys, I have set needinfo on you to see whether you are lucky enough to get it reproduced with MALLOC_CHECK_ set. The memory corruption clearly happens before the crash, which is nearly impossible to track down in such a big code base. If you can't get it reproduced and provide more info, I'm going to close this as INSUFFICIENT_DATA.

Comment 17 Osier Yang 2013-03-14 11:43:36 UTC
Okay, given that the bug is really random, I think it's unlikely to be reproduced again, and thus no more info will be forthcoming. Closing.

