Bug 105718
Summary: (NET E1000) driver gives poor performance with jumbo frames

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 2.1 | Reporter: | Brian Feeny <signal> |
| Component: | kernel | Assignee: | John W. Linville <linville> |
| Status: | CLOSED WORKSFORME | QA Contact: | Brian Brock <bbrock> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.1 | CC: | davem, jbaron, k.georgiou, riel, scott.feldman |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i686 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2005-03-15 14:52:43 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
Brian Feeny, 2003-09-26 20:10:26 UTC
Here is some info about how the machines are configured, and some notes about the tests.

The NFS clients mount the filesystem as follows:

```
/etc/fstab:
mx-nfs.mx:/home/cust  /home/cust  nfs  hard,intr,rsize=8192,wsize=8192  0 0
```

The NFS server has the filesystem set up as follows:

```
/etc/fstab:
/dev/sdb1  /home/cust  ext3  defaults,data=journal,noatime  1 14

/etc/exports:
/home/cust  192.168.10.0/255.255.255.0(rw,no_root_squash)
```

The server shows the following for nfsstat after many tests:

```
[root@mx-nfs network-scripts]# /usr/sbin/nfsstat
Warning: /proc/net/rpc/nfs: No such file or directory
Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
87948134   12         12         0          0

Server nfs v3:
null         getattr      setattr      lookup       access       readlink
0         0% 13796     0% 630       0% 10077     0% 33061     0% 0         0%
read         write        create       mkdir        symlink      mknod
45134282  2% 35832835 40% 1260      0% 0         0% 0         0% 0         0%
remove       rmdir        rename       link         readdir      readdirplus
1260      0% 0         0% 0         0% 0         0% 9         0% 0         0%
fsstat       fsinfo       pathconf     commit
8869      0% 8869      0% 0         0% 6903186   7%
```

The client shows the following for nfsstat after many tests:

```
Client rpc stats:
calls      retrans    authrefrsh
166801290  1981       0

Client nfs v3:
null         getattr      setattr      lookup       access       readlink
0         0% 13823     0% 630       0% 10085     0% 33100     0% 0         0%
read         write        create       mkdir        symlink      mknod
45134281  2% 35831152 40% 1260      0% 0         0% 0         0% 0         0%
remove       rmdir        rename       link         readdir      readdirplus
1260      0% 0         0% 0         0% 0         0% 11        0% 0         0%
fsstat       fsinfo       pathconf     commit
8873      0% 8873      0% 0         0% 6903186   7%
```

The server network adapter shows the following after many tests:

```
[root@mx-nfs network-scripts]# netstat -i
Kernel Interface table
Iface  MTU   Met  RX-OK      RX-ERR RX-DRP RX-OVR  TX-OK      TX-ERR TX-DRP TX-OVR Flg
eth1   1500  0    70949182   0      0      0       80169808   0      0      0      BMRU
```

The client network adapter shows the following after many tests:

```
[root@mx-admin NEW]# netstat -i
Kernel Interface table
Iface  MTU   Met  RX-OK      RX-ERR RX-DRP RX-OVR  TX-OK      TX-ERR TX-DRP TX-OVR Flg
eth1   1500  0    177924705  12     0      0       159498604  0      0      0      BMRU
```

Notes:

1. The NFS server is set to run with RPCNFSDCOUNT=15 in /etc/rc.d/init.d/nfs.
2. The client and server are connected via a back-to-back Cat5e patch cable during the tests.
3. The client and server are both set to the same MTU during each test. This is verified by `ifconfig -a` and a successful `/usr/sbin/tracepath server/2049` from the client.

Here is an update: I ran the above test with the e1000 driver, MTU 9000, and 8192 blocksize, but this time I changed some driver parameters. This is how I set them:

```
client: options e1000 Speed=1000 Duplex=2 RxDescriptors=256 TxAbsIntDelay=0
server: options e1000 Speed=1000 Duplex=2 RxDescriptors=256 TxAbsIntDelay=0 InterruptThrottleRate=0
```

My reasoning was that I wanted the two sides to match. The 4.x and 5.x drivers use different RxDescriptors and TxAbsIntDelay defaults; this way they are the same. I also disabled the dynamic interrupt throttling of the 5.x driver on the server side. The results are a substantial improvement. I think this is a step in the right direction, but I am unsure exactly how to proceed. If I can attach files to this bug, I will attach a tar containing all the test results, including this last one, which I call "custom1" because I used custom /etc/modules.conf parameters. My next tests will disable hyperthreading and SMP altogether, to see whether it has something to do with smp_affinity.

Brian

Created attachment 94768 [details]
test results from NFS benchmark
These are the test results of the iozone NFS tests. Included is a README file that serves as an index.
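The performance gap between MTU 1500 and MTU 9000 with rsize/wsize=8192 is consistent with IP fragmentation of NFS-over-UDP traffic: at the standard MTU, each 8 KB read or write payload must be split into several IP fragments, while a jumbo frame carries it whole, and losing any one fragment forces a full RPC retransmit (note the 1981 retrans on the client). A rough sketch of the arithmetic, with header sizes simplified and RPC/NFS header overhead ignored:

```python
import math

def ip_fragments(payload_bytes, mtu, ip_header=20):
    """Estimate the number of IP fragments for a UDP datagram at a given MTU.

    Every fragment's data length except the last must be a multiple of
    8 bytes, so the per-fragment capacity is rounded down accordingly.
    """
    per_fragment = (mtu - ip_header) // 8 * 8
    return math.ceil(payload_bytes / per_fragment)

# An rsize=8192 NFS read payload plus the 8-byte UDP header;
# RPC/NFS headers are ignored for this rough estimate.
payload = 8192 + 8

print(ip_fragments(payload, 1500))  # 6 fragments at the standard MTU
print(ip_fragments(payload, 9000))  # 1 fragment with jumbo frames
```

This is only an illustration of why jumbo frames matter for 8 KB NFS block sizes, not an exact model of the wire format.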
Brian,

This one just found its way to me... Are you still using 2.1? Have you picked up the latest updates? (The e1000 driver has received a number of updates since this was reported...) Would you mind re-running your tests with the latest 2.1 update (U7) and reporting the results?

Closed due to lack of response. Please attempt to recreate with the latest available AS2.1 update, and reopen if the problem persists.