Bug 455719 - Performance for Qlogic InfiniPath QLE7140 is poor
Summary: Performance for Qlogic InfiniPath QLE7140 is poor
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Assignee: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-07-17 12:22 UTC by Jonathan Barber
Modified: 2013-04-30 12:49 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-04-30 12:49:20 UTC
Target Upstream Version:
Embargoed:



Description Jonathan Barber 2008-07-17 12:22:19 UTC
Performance of the QLogic InfiniPath QLE7140, as measured by NetPIPE 3.7.1 using
the OpenIB verbs, is poor (~400 Mb/s) by default on RHEL 5.2 with kernel-2.6.18-8.el5.

NetPIPE version 3.7.1 was obtained from:
http://www.scl.ameslab.gov/netpipe/code/NetPIPE-3.7.1.tar.gz

The fix described here was applied:
http://lists.scl.ameslab.gov/pipermail/netpipe/2007-July/000102.html

Running the following NetPIPE command:
./NPibv -n 10 -h 10.0.102.2

reports at most ~460 Mb/s.

This appears in the kernel log on boot:
PCI: Enabling device 0000:0c:00.0 (0140 -> 0142)
PCI: Setting latency timer of device 0000:0c:00.0 to 64
mtrr: base or size exceeds the MTRR width
ib_ipath 0000:0c:00.0: mtrr_add()  WC for PIO bufs failed (-22)
ib_ipath 0000:0c:00.0: infinipath0: Write combining not enabled (err 22):
performance may be poor
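
Error 22 is EINVAL: the mtrr_add() call the driver makes for its PIO buffer
region was rejected, so the buffers remain uncached and PIO writes are not
coalesced. As a rough illustration only (the function and variable names below
are not taken from the ib_ipath source), the driver is attempting something
like:

/* Illustrative sketch -- names are not from the ib_ipath source. */
#include <asm/mtrr.h>

static int ipath_enable_wc(unsigned long pio_base, unsigned long pio_len)
{
        /* Request a write-combining MTRR covering the PIO send buffers. */
        int cookie = mtrr_add(pio_base, pio_len, MTRR_TYPE_WRCOMB, 1);

        /* A negative return (-22 == -EINVAL here) produces the
           "mtrr_add() WC for PIO bufs failed" message above. */
        if (cookie < 0)
                return cookie;
        return 0;
}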

# lspci -v -s 0000:0c:00.0
0c:00.0 InfiniBand: QLogic, Corp. InfiniPath PE-800 (rev 02)
        Subsystem: QLogic, Corp. InfiniPath PE-800
        Flags: bus master, fast devsel, latency 0, IRQ 90
        Memory at fc200000 (64-bit, non-prefetchable) [size=2M]
        Capabilities: [40] Power Management version 2
        Capabilities: [50] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable+
        Capabilities: [70] Express Endpoint IRQ 0
        Capabilities: [100] Advanced Error Reporting

# cat /proc/mtrr
reg00: base=0x00000000 (   0MB), size=2048MB: write-back, count=1
reg01: base=0x80000000 (2048MB), size=1024MB: write-back, count=1
reg02: base=0x100000000 (4096MB), size=4096MB: write-back, count=1
reg03: base=0x200000000 (8192MB), size=8192MB: write-back, count=1
reg04: base=0x400000000 (16384MB), size=1024MB: write-back, count=1
reg05: base=0xbfc00000 (3068MB), size=   4MB: uncachable, count=1

Running the following on all of the NetPIPE nodes:
# echo "base=0xfc200000 size=0x200000 type=write-combining" >| /proc/mtrr

raises NetPIPE performance to ~6700 Mb/s.
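
For reference, the same entry can be added through the MTRR ioctl interface
instead of the /proc/mtrr text interface. The sketch below hard-codes the base
and size of the 2M BAR from the lspci output above; it is an illustration, not
code from the driver or from QLogic:

/* Sketch: add a write-combining MTRR via MTRRIOC_ADD_ENTRY. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/mtrr.h>

int main(void)
{
        struct mtrr_sentry sentry = {
                .base = 0xfc200000,   /* device memory BAR */
                .size = 0x200000,     /* 2M */
                .type = MTRR_TYPE_WRCOMB,
        };
        int fd = open("/proc/mtrr", O_WRONLY);

        if (fd < 0 || ioctl(fd, MTRRIOC_ADD_ENTRY, &sentry) < 0) {
                perror("mtrr");
                return 1;
        }
        close(fd);
        return 0;
}

Either way, the entry does not persist across reboots, so the workaround has
to be reapplied at boot (e.g. from rc.local) on every node.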

Version-Release number of selected component (if applicable):
Hardware is a Dell PowerEdge 1950 with BIOS 2.3.1, 16 GB of RAM, and dual Intel Xeon X5355 CPUs.

Applies to kernel-2.6.18-8.el5.

How reproducible:
Every time.

Steps to Reproduce:
1. Boot machine
2. Run netpipe: ./NPibv -n 10 -h 10.0.102.2
  
Actual results:
~400 Mb/s using the OpenIB verbs (NPibv)

Expected results:
~7000 Mb/s

Comment 1 Jes Sorensen 2013-04-30 12:49:20 UTC
This BZ is well out of date; if you experience problems with recent versions
of RHEL, please open a fresh Bugzilla bug.

Thanks,
Jes

