Bug 684268 - virtio_net: missing schedule on oom [rhel-6.0.z]
Summary: virtio_net: missing schedule on oom [rhel-6.0.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Frantisek Hrbata
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On: 676579
Blocks:
 
Reported: 2011-03-11 16:20 UTC by RHEL Program Management
Modified: 2013-01-11 03:52 UTC (History)
CC List: 10 users

Fixed In Version: kernel-2.6.32-71.21.1.el6
Doc Type: Bug Fix
Doc Text:
Intensive usage of resources on a guest led to a failure of networking on that guest: packets could no longer be received. The failure occurred when the DMA (Direct Memory Access) ring was consumed before NAPI (New API, an interface for networking devices that uses interrupt mitigation techniques) was enabled, which resulted in the next interrupt request never being received. The regular interrupt handler was not affected in this situation (because it can process packets in place); however, the OOM (Out Of Memory) handler did not detect this situation, and networking failed. With this update, NAPI is scheduled after each napi_enable operation; thus, networking no longer fails under the aforementioned circumstances.
Clone Of:
Environment:
Last Closed: 2011-04-08 02:59:20 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2011:0421 0 normal SHIPPED_LIVE Important: kernel security and bug fix update 2011-04-08 02:56:45 UTC

Description RHEL Program Management 2011-03-11 16:20:48 UTC
This bug has been copied from bug #676579 and has been proposed
to be backported to 6.0 z-stream (EUS).

Comment 4 Chao Yang 2011-03-28 02:15:53 UTC
Reproduced on rhel6.0 guest with kernel:2.6.32-71.el6.x86_64.
Steps:
1) boot guest with 512M mem and virtio net, ping remote, network works fine.
2) run netserver inside guest.
3) on host, launch 2000 netperf clients in background to stress netserver.
#!/bin/sh
# Launch 2000 background netperf clients against the guest to
# exhaust its memory and trigger the OOM path in virtio_net.
ip=$guest_ip
i=0
while [ $i -lt 2000 ]
do
    netperf -H $ip -l 300 &
    i=`expr $i + 1`
    echo "launched Client-No.$i"
done
4) ping guest
Actual result: network connectivity is lost; the ping fails.
CLI:
/usr/libexec/qemu-kvm -M rhel6.0.0 -enable-kvm -m 512 -smp 2 -name rhel6.0 -uuid `uuidgen` -rtc base=localtime,clock=vm,driftfix=slew -no-kvm-pit-reinjection -boot c -drive file=/root/RHEL-Server-6.0-64.qcow2,if=none,id=drive-virtio-0-0,media=disk,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio-0-0,id=virt0-0-0 -netdev tap,id=hostnet1 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:40:01:31:e3 -usb -device usb-tablet,id=input1 -vnc :0 -monitor stdio -balloon none

-------------------------------------------------------------------------
Verified on guest kernel-2.6.32-71.23.1.el6.x86_64.rpm with the same steps and CLI as above: after stressing netserver, networking in the guest still works and the remote host can be pinged.

Comment 5 errata-xmlrpc 2011-04-08 02:59:20 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0421.html

Comment 6 Martin Prpič 2011-04-12 12:49:27 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Intensive usage of resources on a guest led to a failure of networking on that guest: packets could no longer be received. The failure occurred when the DMA (Direct Memory Access) ring was consumed before NAPI (New API, an interface for networking devices that uses interrupt mitigation techniques) was enabled, which resulted in the next interrupt request never being received. The regular interrupt handler was not affected in this situation (because it can process packets in place); however, the OOM (Out Of Memory) handler did not detect this situation, and networking failed. With this update, NAPI is scheduled after each napi_enable operation; thus, networking no longer fails under the aforementioned circumstances.

