Bug 1252473

Summary: Libvirt crashed on virsh domiftune
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Whiteboard:
Reporter: Michal Privoznik <mprivozn>
Assignee: Michal Privoznik <mprivozn>
QA Contact: Virtualization Bugs <virt-bugs>
Docs Contact:
CC: dyuan, fjin, honzhang, mzhan, rbalakri
Target Milestone: rc
Target Release: ---
Fixed In Version: libvirt-1.2.17-6.el7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-19 06:50:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Michal Privoznik 2015-08-11 13:58:17 UTC
Description of problem:

I had a domain with some <bandwidth/> set (@floor was among the fields set). Then, after I started the domain and generated some network traffic (e.g. a yum update), the bandwidth limit turned out to be too tight, so I wanted to remove it temporarily ('virsh domiftune $dom $net --inbound 0'). Later, when shutting down the domain, libvirtd crashed.
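
The commands referenced above can be run directly from the shell; the following is only a sketch, assuming a domain name of $dom and an interface (target device) of $net. Running domiftune without --inbound/--outbound merely prints the current settings:

# virsh domiftune $dom $net               # show current inbound/outbound values
# virsh domiftune $dom $net --inbound 0   # clear the inbound limit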


Version-Release number of selected component (if applicable):
libvirt-1.2.17-3.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a domain with the following interface:
    <interface type='network' trustGuestRxFilters='yes'>
      <mac address='52:54:00:d6:c0:0b'/>
      <source network='default'/>
      <bandwidth>
        <inbound average='100' floor='50'/>
        <outbound average='100'/>
      </bandwidth>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>


2. After the domain is started, use 'virsh domiftune $domain $interface --inbound 0' to clear the inbound bandwidth limit

3. Shut down the domain: 'virsh shutdown $domain'

4. Observe that the daemon has crashed.
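
A consolidated sketch of the steps above, assuming the domain is named $dom and 'virsh domiflist $dom' reports its interface as $net:

# virsh start $dom
# virsh domiflist $dom                     # find the target device name, e.g. vnet0
# virsh domiftune $dom $net --inbound 0    # clear the inbound bandwidth
# virsh shutdown $dom                      # on affected builds libvirtd segfaults here
# systemctl status libvirtd                # shows code=killed, signal=SEGV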

Actual results:
Daemon crashes.

Expected results:
Daemon does not crash and updates the bandwidth correctly.

Additional info:

Comment 5 hongming 2015-08-24 06:51:57 UTC
It can still be reproduced as follows.

# rpm -q libvirt
libvirt-1.2.17-5.el7.x86_64

# virsh list
 Id    Name                           State
----------------------------------------------------
 3     r7.1                           running


# virsh dumpxml r7.1|grep /interface -B11
    <interface type='network'>
      <mac address='52:54:00:5a:ab:c5'/>
      <source network='default' bridge='virbr0'/>
      <bandwidth>
        <inbound average='1000' peak='5000' floor='200' burst='1024'/>
        <outbound average='128' peak='256' burst='256'/>
      </bandwidth>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>


# virsh domiftune r7.1 vnet0 --inbound 0

# virsh shutdown r7.1
Domain r7.1 is being shutdown


# virsh list
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused


# systemctl status libvirtd -l
 libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: failed (Result: start-limit) since Mon 2015-08-24 02:41:31 EDT; 8min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
  Process: 7545 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=killed, signal=SEGV)
 Main PID: 7545 (code=killed, signal=SEGV)
   CGroup: /system.slice/libvirtd.service
           ├─6951 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─6952 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Aug 24 02:41:30 hongmingtest.nay.redhat.com systemd[1]: Unit libvirtd.service entered failed state.
Aug 24 02:41:30 hongmingtest.nay.redhat.com systemd[1]: libvirtd.service failed.
Aug 24 02:41:31 hongmingtest.nay.redhat.com systemd[1]: libvirtd.service holdoff time over, scheduling restart.
Aug 24 02:41:31 hongmingtest.nay.redhat.com systemd[1]: start request repeated too quickly for libvirtd.service
Aug 24 02:41:31 hongmingtest.nay.redhat.com systemd[1]: Failed to start Virtualization daemon.
Aug 24 02:41:31 hongmingtest.nay.redhat.com systemd[1]: Unit libvirtd.service entered failed state.
Aug 24 02:41:31 hongmingtest.nay.redhat.com systemd[1]: libvirtd.service failed.

Comment 6 hongming 2015-08-24 07:48:29 UTC
Please ignore Comment 5; the wrong (older) libvirt version was used there.

Comment 7 hongming 2015-08-24 08:29:33 UTC
Verified as follows. The results are as expected, so the bug status is moved to VERIFIED.


[root@hongmingtest images]# rpm -q libvirt
libvirt-1.2.17-6.el7.x86_64

[root@hongmingtest images]# virsh start r7.1
Domain r7.1 started

[root@hongmingtest images]# virsh dumpxml r7.1|grep /interface -B11
    <interface type='network'>
      <mac address='52:54:00:5a:ab:c5'/>
      <source network='default' bridge='virbr0'/>
      <bandwidth>
        <inbound average='1000' peak='5000' floor='200' burst='1024'/>
        <outbound average='128' peak='256' burst='256'/>
      </bandwidth>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

[root@hongmingtest images]# virsh domiftune r7.1 vnet0 --inbound 0

[root@hongmingtest images]# virsh shutdown r7.1
Domain r7.1 is being shutdown

[root@hongmingtest images]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     r7.1                           shut off

[root@hongmingtest images]# virsh start r7.1
Domain r7.1 started

[root@hongmingtest images]# virsh domiftune r7.1 vnet0 --outbound 0

[root@hongmingtest images]# virsh dumpxml r7.1|grep /interface -B11
    </controller>
    <interface type='network'>
      <mac address='52:54:00:5a:ab:c5'/>
      <source network='default' bridge='virbr0'/>
      <bandwidth>
        <inbound average='1000' peak='5000' floor='200' burst='1024'/>
      </bandwidth>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

[root@hongmingtest images]# virsh shutdown r7.1
Domain r7.1 is being shutdown

[root@hongmingtest images]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     r7.1                           shut off

[root@hongmingtest images]# virsh start r7.1
Domain r7.1 started

[root@hongmingtest images]# virsh dumpxml r7.1|grep /interface -B11
    <interface type='network'>
      <mac address='52:54:00:5a:ab:c5'/>
      <source network='default' bridge='virbr0'/>
      <bandwidth>
        <inbound average='1000' peak='5000' floor='200' burst='1024'/>
        <outbound average='128' peak='256' burst='256'/>
      </bandwidth>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

[root@hongmingtest images]# virsh domiftune r7.1 vnet0 --outbound 0

[root@hongmingtest images]# virsh domiftune r7.1 vnet0 --inbound 0


[root@hongmingtest images]# virsh dumpxml r7.1|grep /interface -B7
    <interface type='network'>
      <mac address='52:54:00:5a:ab:c5'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

[root@hongmingtest images]# virsh shutdown r7.1
Domain r7.1 is being shutdown

[root@hongmingtest images]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     r7.1                           shut off

Comment 9 errata-xmlrpc 2015-11-19 06:50:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html