Bug 846585 - [qemu-kvm] [hot-plug] qemu-process (RHEL6.3 guest) goes into D state during nic hot unplug (netdev_del hostnet1)
Summary: [qemu-kvm] [hot-plug] qemu-process (RHEL6.3 guest) goes into D state during n...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: jason wang
QA Contact: GenadiC
URL:
Whiteboard:
Duplicates: 851874 (view as bug list)
Depends On:
Blocks: 851444
 
Reported: 2012-08-08 07:51 UTC by GenadiC
Modified: 2018-11-29 20:59 UTC (History)
CC List: 24 users

Fixed In Version: kernel-2.6.32-301.el6
Doc Type: Bug Fix
Doc Text:
If a mirror or redirection action is configured to cause packets to go to another device, the classifier holds a reference count on that device. However, the code previously assumed that the administrator had cleaned up all redirections before removing the device. Packets were therefore dropped if the mirrored device was not present, and connectivity to the host could be lost. To prevent such problems, a notifier and cleanup are now run during the unregister action, and packets are no longer dropped when the mirrored device is not present.
Clone Of:
Environment:
Last Closed: 2013-02-21 06:45:26 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs (15.27 MB, application/x-tar)
2012-08-08 07:58 UTC, GenadiC
no flags Details


Links
System: Red Hat Product Errata
ID: RHSA-2013:0496
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Enterprise Linux 6 kernel update
Last Updated: 2013-02-20 21:40:54 UTC

Description GenadiC 2012-08-08 07:51:02 UTC
Description of problem:

We have a case in which the qemu process (guest, latest RHEL 6.3) goes into Dl state after NIC deactivation.
In some other cases, the host loses network connectivity after the same deactivation.

The following message starts to appear on the host console:

Message from syslogd@orchid-vds1 at Aug  8 10:35:39 ...
kernel:unregister_netdevice: waiting for vnet1 to become free. Usage count = 2
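
For reference, one way to confirm the stuck qemu process and the refcount warning on the host (a minimal sketch using standard tools; the process name and log source are assumptions about this setup):

# show the qemu-kvm process state (D = uninterruptible sleep, l = multi-threaded)
ps -eo pid,stat,wchan:32,cmd | grep '[q]emu-kvm'

# watch for the unregister_netdevice refcount message
dmesg | grep unregister_netdevice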

We started a SystemTap trace to capture the QMP commands exchanged between libvirt and the qemu process (attached); the last command sent to the qemu process is:

 219.898 > 0x7f57f8010fe0 {"execute":"device_del","arguments":{"id":"net2"},"id":"libvirt-70"}
219.899 < 0x7f57f8010fe0 {"return": {}, "id": "libvirt-70"}
219.899 > 0x7f57f8010fe0 {"execute":"netdev_del","arguments":{"id":"hostnet2"},"id":"libvirt-71"}
219.933 < 0x7f57f8010fe0 {"return": {}, "id": "libvirt-71"}
220.298 > 0x7f57f8010fe0 {"execute":"query-balloon","id":"libvirt-72"}
220.299 < 0x7f57f8010fe0 {"return": {"actual": 1073741824}, "id": "libvirt-72"}
220.347 > 0x7f57f8010fe0 {"execute":"device_del","arguments":{"id":"net1"},"id":"libvirt-73"}
220.348 < 0x7f57f8010fe0 {"return": {}, "id": "libvirt-73"}
220.348 > 0x7f57f8010fe0 {"execute":"netdev_del","arguments":{"id":"hostnet1"},"id":"libvirt-74"
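
For what it's worth, the same unplug sequence can presumably be driven by hand through virsh qemu-monitor-command (a sketch only; the domain name and device/netdev ids are taken from the log above):

# hot-unplug the guest-visible NIC, then remove its host backend
virsh qemu-monitor-command RHEL6_1 '{"execute":"device_del","arguments":{"id":"net1"}}'
virsh qemu-monitor-command RHEL6_1 '{"execute":"netdev_del","arguments":{"id":"hostnet1"}}'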

process command-line:

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name RHEL6_1 -uuid d41a5e62-582b-4d52-8326-8335a94ed77c -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6Server-6.3.0.3.el6,serial=06577002-2B21-3E45-9926-E52BFFFF3659_00:14:5E:17:D5:B0,uuid=d41a5e62-582b-4d52-8326-8335a94ed77c -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/RHEL6_1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2012-08-07T19:08:27,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/47ee94d0-cfcd-44a6-9a8f-9cec41868ae5/85d077a4-1992-4030-9393-678e397a31e8/images/7759d507-99c0-43e0-8d52-e86af5327d0a/15b4cdcc-9668-45f9-b680-45a671dc45ad,if=none,id=drive-virtio-disk0,format=qcow2,serial=7759d507-99c0-43e0-8d52-e86af5327d0a,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:23:46:0a,bus=pci.0,addr=0x3 -netdev tap,fd=31,id=hostnet1,vhost=on,vhostfd=32 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:1a:4a:23:46:23,bus=pci.0,addr=0x4 -netdev tap,fd=33,id=hostnet2,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:1a:4a:23:46:c5,bus=pci.0,addr=0x6 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/RHEL6_1.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=inputs -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8
char device redirected to /dev/pts/2

Comment 1 GenadiC 2012-08-08 07:57:42 UTC
host:

packages:

libvirt-0.9.10-21.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.298.el6_3.x86_64
vdsm-4.9.6-26.0.el6_3.x86_64

kernel: 2.6.32-279.el6.x86_64

guest:

[root@e ~]# lsmod | grep virtio
virtio_balloon          4856  0 
virtio_console         18027  0 
virtio_net             16760  0 
virtio_blk              7292  3 
virtio_pci              7113  0 
virtio_ring             7729  5 virtio_balloon,virtio_console,virtio_net,virtio_blk,virtio_pci
virtio                  4890  5 virtio_balloon,virtio_console,virtio_net,virtio_blk,virtio_pci

kernel: 2.6.32-279.el6.x86_64

Comment 2 GenadiC 2012-08-08 07:58:24 UTC
Created attachment 602959 [details]
logs

Comment 3 Dor Laor 2012-08-08 09:15:38 UTC
- Does it happen each time, or not that often?
- Can you please repeat the exact same scenario but with virsh commands?
  The KVM QE team runs these tests on a regular basis and we haven't seen similar
  reports.

Comment 4 Haim 2012-08-08 15:47:36 UTC
(In reply to comment #3)
> - Does it happen each time, or not that often?

It happens each time for specific guests (domains) managed by vdsm and libvirt.
> - Can you please repeat the exact same scenario but with virsh commands?
Yes.

reproduction steps:

use the following network xml files:

vnet0.xml:

<interface type='bridge'>
  <mac address='00:1a:4a:23:46:0a'/>
  <source bridge='rhevm'/>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
 </interface>

vnet1.xml:
                                                                                                                                                                                                                     
<interface type='bridge'>
  <mac address='00:1a:4a:23:46:23'/>
  <source bridge='VM_VLAN12'/>
  <target dev='vnet1'/>
  <model type='virtio'/>
  <alias name='net1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>

vnet2.xml:
                                                                                                                                                                                                                     
<interface type='bridge'>
  <mac address='00:1a:4a:23:46:c5'/>
  <source bridge='VM_VLAN12'/>
  <target dev='vnet2'/>
  <model type='virtio'/>
  <alias name='net2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>

start vm using the following xml:

<domain type="kvm">
        <name>RHEL6_1</name>
        <uuid>d41a5e62-582b-4d52-8326-8335a94ed77c</uuid>
        <memory>1048576</memory>
        <currentMemory>1048576</currentMemory>
        <vcpu>1</vcpu>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/RHEL6_1.com.redhat.rhevm.vdsm"/>
                </channel>
                <input bus="ps2" type="mouse"/>
                <channel type="spicevmc">
                        <target name="com.redhat.spice.0" type="virtio"/>
                </channel>
                <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
                        <channel mode="secure" name="main"/>
                        <channel mode="secure" name="inputs"/>
                </graphics>
                <console type="pty">
                        <target port="0" type="virtio"/>
                </console>
                <controller type="usb">
                        <address  domain="0x0000"  function="0x2"  slot="0x01"  type="pci" bus="0x00"/>
                </controller>
                <video>
                        <address  domain="0x0000"  function="0x0"  slot="0x02"  type="pci" bus="0x00"/>
                        <model heads="1" type="qxl" vram="65536"/>
                </video>
                <interface type="bridge">
                        <mac address="00:1a:4a:23:46:0a"/>
                        <model type="virtio"/>
                        <source bridge="rhevm"/>
                </interface>
                <interface type="bridge">
                        <address  domain="0x0000"  function="0x0"  slot="0x04"  type="pci" bus="0x00"/>
                        <mac address="00:1a:4a:23:46:23"/>
                        <model type="virtio"/>
                        <source bridge="VM_VLAN12"/>
                </interface>
                <interface type="bridge">
                        <address  domain="0x0000"  function="0x0"  slot="0x06"  type="pci" bus="0x00"/>
                        <mac address="00:1a:4a:23:46:c5"/>
                        <model type="virtio"/>
                        <source bridge="VM_VLAN12"/>
                </interface>
                <memballoon model="virtio"/>
                <disk device="cdrom" snapshot="no" type="file">
                        <address  bus="1"  controller="0"  target="0"  type="drive" unit="0"/>
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial></serial>
                </disk>
                <disk device="disk" snapshot="no" type="block">
                        <address  domain="0x0000"  function="0x0"  slot="0x05"  type="pci" bus="0x00"/>
                        <source dev="/rhev/data-center/47ee94d0-cfcd-44a6-9a8f-9cec41868ae5/85d077a4-1992-4030-9393-678e397a31e8/images/7759d507-99c0-43e0-8d52-e86af5327d0a/15b4cdcc-9668-45f9-b680-45a671dc45ad"/>
                        <target bus="virtio" dev="vda"/>
                        <serial>7759d507-99c0-43e0-8d52-e86af5327d0a</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
                </disk>
        </devices>
        <os>
                <type arch="x86_64" machine="rhel6.3.0">hvm</type>
                <smbios mode="sysinfo"/>
        </os>
        <sysinfo type="smbios">
                <system>
                        <entry name="manufacturer">Red Hat</entry>
                        <entry name="product">RHEV Hypervisor</entry>
                        <entry name="version">6Server-6.3.0.3.el6</entry>
                        <entry name="serial">936B24D8-3EA3-3A6C-AD2E-84D35C84B839_00:14:5E:17:D0:38</entry>
                        <entry name="uuid">d41a5e62-582b-4d52-8326-8335a94ed77c</entry>
                </system>
        </sysinfo>
        <clock adjustment="-43200" offset="variable">
                <timer name="rtc" tickpolicy="catchup"/>
        </clock>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>Conroe</model>
                <topology cores="1" sockets="1" threads="1"/>
        </cpu>
</domain>

run the following detach command using libvirt (virsh):

virsh detach-device <domain> vnet0.xml

Result: lost connectivity to the host.
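
Put together, a minimal end-to-end reproduction could look like the following (sketch only; the domain XML file name is an assumption, everything else comes from the XMLs and commands above):

# define and start the guest from the domain XML above (file name assumed)
virsh define RHEL6_1.xml
virsh start RHEL6_1

# detach one of the interfaces while the guest is running
virsh detach-device RHEL6_1 vnet0.xml

# check whether qemu went into D state and whether the host logged the refcount warning
ps -o pid,stat,cmd -C qemu-kvm
grep unregister_netdevice /var/log/messages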

Comment 18 Jarod Wilson 2012-08-24 19:27:52 UTC
Patch(es) available on kernel-2.6.32-301.el6

Comment 21 Meni Yakove 2012-08-26 11:54:41 UTC
Verified with kernel-2.6.32-301.el6.x86_64.

Comment 22 jason wang 2012-08-27 11:27:04 UTC
*** Bug 851874 has been marked as a duplicate of this bug. ***

Comment 24 errata-xmlrpc 2013-02-21 06:45:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0496.html

