Bug 672910

Summary: What is RH's opinion and reason for needing IRQ balancing on.
Product: Red Hat Enterprise Linux 6
Reporter: Travis Gummels <tgummels>
Component: irqbalance
Assignee: Anton Arapov <anton>
Status: CLOSED NOTABUG
QA Contact: Red Hat Kernel QE team <kernel-qe>
Severity: medium
Priority: medium
Version: 6.1
CC: nobody, woodard
Target Milestone: rc
Flags: tgummels: needinfo+
Target Release: 6.1
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Last Closed: 2011-01-26 17:38:54 UTC

Description Travis Gummels 2011-01-26 17:18:04 UTC
Description of problem:

LLNL is receiving conflicting recommendations regarding the use of irqbalance. Q-Logic claims that locking its driver's interrupts to one socket (across 4 CPUs) gives better performance. LLNL found that they lose some performance on the QLogic IB card when running with irqbalance versus Q-Logic's recommended configuration. That said, with irqbalance on, the Mellanox card performed better because it was not sharing cpu0 with everything else for IRQs. LLNL's issue is that with irqbalance off, the other cards (onboard copper Ethernet, 10GigE, and the Mellanox cards) end up locked to cpu0 unless they write scripts to move the IRQs around. LLNL is looking for Red Hat's opinion on this.
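The scripted workaround the description alludes to is ordinarily done through the kernel's /proc/irq/&lt;irq&gt;/smp_affinity interface, which takes a hex CPU bitmask. A minimal sketch, with the caveats that the IRQ number 42 is a placeholder (the real number has to be looked up in /proc/interrupts for the card in question) and that irqbalance must be stopped first or it may rewrite the affinity:

```shell
# Build a mask covering cores 0-3: bits 0..3 set, i.e. (1 << 4) - 1 = 0xf.
MASK=$(printf '%x' $(( (1 << 4) - 1 )))
echo "pinning mask: $MASK"

# The pin itself needs root and a real IRQ number (42 is hypothetical),
# so it is left commented out here:
# echo "$MASK" > /proc/irq/42/smp_affinity
```

With irqbalance running, any mask written this way is liable to be overwritten on the daemon's next rebalance pass, which is why sites that hand-pin IRQs typically disable the service entirely.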

In a discussion with Ben Woodard (LLNL DEE) : 

<neb> re: 00370142 I never heard a definitive answer on that one. You had your idea of how it might work.
<gummels> I sent trent an email with some testing ideas
<neb> so I think that one needs to go up to get a more definitive answer than either you or I can give.
<gummels> ok on that
<neb> We've looked at it as much as possible and exhausted our knowledge base. We would like a statement about how it should work in RHEL6 with the kinds of cards we have and the kind of mbs we have. A lot has changed over the years with these NUMA machines and the resources we draw on are out of date. We need to go to the performance group and the guy who works on irqbalance and get some more informed opinions.

Version-Release number of selected component (if applicable):

irqbalance-0.55-27.el6.x86_64

qib-1.5.2-20  -- As of 11/05/2010

How reproducible:

This can be reproduced on LLNL's test equipment.

Actual results:

Better performance with irqbalance off and Q-Logic's driver locked to one socket (socket 0, cores 0-4 of a 6-core CPU).
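If that pin is applied through /proc/irq/&lt;irq&gt;/smp_affinity, the hex cpumask for cores 0-4 inclusive is the five low bits set. A small sketch of the arithmetic (the mask value, not any particular IRQ, is what is being illustrated):

```shell
# Cores 0-4 inclusive = bits 0..4 set = (1 << 5) - 1 = 0x1f.
MASK=$(printf '%x' $(( (1 << 5) - 1 )))
echo "$MASK"   # prints: 1f
```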

Expected results:

Unknown, looking to Red Hat irqbalance maintainer for feedback.

Additional info:

Comment 1 RHEL Program Management 2011-01-26 17:28:19 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.