Bug 1000973 - libvirtd crash when starting guest after cold-plug interface with hostdev network
Summary: libvirtd crash when starting guest after cold-plug interface with hostdev network
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1002669
 
Reported: 2013-08-26 08:51 UTC by hongming
Modified: 2013-11-21 09:09 UTC
CC: 8 users

Fixed In Version: libvirt-0.10.2-24.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1002669 (view as bug list)
Environment:
Last Closed: 2013-11-21 09:09:16 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd debug log (1.25 MB, text/plain), 2013-08-26 08:53 UTC, hongming
gdb backtrace (10.27 KB, text/plain), 2013-08-27 04:03 UTC, hongming


Links
Red Hat Product Errata RHBA-2013:1581 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update, last updated 2013-11-21 01:11:35 UTC

Description hongming 2013-08-26 08:51:27 UTC
Description of problem:
libvirtd crashes when starting a guest after cold-plugging an interface with a hostdev network


Version-Release number of selected component (if applicable):
libvirt-0.10.2-23.el6.x86_64 

How reproducible:
100%

Steps to Reproduce:
# virsh start r6
Domain r6 started

# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     no            yes
hostdev-net1         active     no            yes

# lspci|grep 11:10
11:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)


# virsh net-dumpxml hostdev-net1
<network>
  <name>hostdev-net1</name>
  <uuid>c039ec41-79cd-9418-76bc-a74a7ce29715</uuid>
  <forward mode='hostdev' managed='yes'>
    <address type='pci' domain='0x0000' bus='0x11' slot='0x10' function='0x0'/>
    <address type='pci' domain='0x0000' bus='0x11' slot='0x10' function='0x1'/>
  </forward>
</network>

# virsh destroy r6
Domain r6 destroyed

# cat vfpool.xml
<interface type='network'>
   <source network='hostdev-net1'/>
</interface>

# virsh attach-device r6 vfpool.xml --config
Device attached successfully

# virsh start r6
error: Failed to start domain r6
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor


Actual results:
libvirtd crashes.

Expected results:
libvirtd works fine and the guest starts.

Additional info:

Comment 1 hongming 2013-08-26 08:53:58 UTC
Created attachment 790352 [details]
libvirtd debug log

Comment 4 hongming 2013-08-27 04:03:32 UTC
Created attachment 790748 [details]
gdb backtrace

Comment 5 Peter Krempa 2013-08-29 08:58:03 UTC
Fixed upstream:

commit 50348e6edfa10ddb61929bf95a1c4820a9614e19
Author: Peter Krempa <pkrempa>
Date:   Tue Aug 27 19:06:18 2013 +0200

    qemu: Remove hostdev entry when freeing the depending network entry
    
    When using an <interface type="network"> that points to a network with
    hostdev forwarding mode, a hostdev alias is created for the network. This
    alias is inserted into the hostdev list, but is backed by a part of
    the network object that it is connected to.
    
    When a VM is being stopped qemuProcessStop() calls
    networkReleaseActualDevice() which eventually frees the memory for the
    hostdev object. Afterwards when the domain definition is being freed by
    virDomainDefFree() an invalid pointer is accessed by
    virDomainHostdevDefFree() and may cause a crash of the daemon.
    
    This patch removes the entry in the hostdev list before freeing the
    depending memory to avoid this issue.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1000973

Comment 9 Xuesong Zhang 2013-09-09 04:31:46 UTC
Verified this bug with build libvirt-0.10.2-24.el6; it is fixed, so the bug status is changed to VERIFIED.

Steps:
1. prepare one healthy guest.
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     a                              running

2. prepare one hostdev network.
# virsh net-list --all
Name                 State      Autostart     Persistent
--------------------------------------------------
default              active     yes           yes
hostnet              active     yes           yes

# virsh net-dumpxml hostnet
<network>
  <name>hostnet</name>
  <uuid>6b49be3c-bb91-c16d-b475-2929678720f4</uuid>
  <forward mode='hostdev' managed='yes'>
    <address type='pci' domain='0x0000' bus='0x11' slot='0x10' function='0x3'/>
    <address type='pci' domain='0x0000' bus='0x11' slot='0x10' function='0x2'/>
  </forward>
</network>

3. check the vf on the host
# lspci|grep 82576
0e:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0e:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0f:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0f:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0f:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0f:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
10:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
10:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
11:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
11:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

4. prepare an interface XML like the following:
# cat vfpool.xml 
<interface type='network'>
   <source network='hostnet'/>
</interface>

5. destroy the guest, and attach the interface to it with the --config option.
# virsh destroy a
Domain a destroyed

# virsh attach-device a vfpool.xml --config
Device attached successfully

6. start the guest.
# virsh start a
Domain a started

7. make sure libvirtd is still running:
# service libvirtd status
libvirtd (pid  27668) is running...

8. check the qemu process and the dumpxml to make sure the VF is attached to the guest.
# virsh dumpxml a
......
<interface type='network'>
      <mac address='52:54:00:89:a4:43'/>
      <source network='hostnet'/>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
......

# ps -ef|grep qemu
qemu     31330     1 24 00:29 ?        00:00:28 /usr/libexec/qemu-kvm -name a ......-device pci-assign,host=11:10.3,id=hostdev0,configfd=24,bus=pci.0,addr=0x6 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Comment 11 errata-xmlrpc 2013-11-21 09:09:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1581.html

