Bug 67625 - Bad NFS performance in 2.4.18-5
Product: Red Hat Linux
Classification: Retired
Component: kernel
Hardware: i686
OS: Linux
Severity: medium
Assigned To: Arjan van de Ven
QA Contact: Brian Brock
Depends On:
Reported: 2002-06-28 10:16 EDT by diego.santacruz
Modified: 2014-01-21 17:48 EST

Doc Type: Bug Fix
Last Closed: 2004-09-30 11:39:43 EDT

Attachments: None
Description diego.santacruz 2002-06-28 10:16:51 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 Galeon/1.2.5 (X11; Linux i686; U;) Gecko/20020606

Description of problem:
I recently upgraded to the 2.4.18-5 kernel, and now I experience serious
performance problems with NFS v3 reads and/or writes, depending on the server.

With a Solaris 8 server machine (SunOS 5.8 Generic_108528-12 sun4u sparc
SUNW,Ultra-4) when I NFS mount with the default rsize (8192 now I believe) or
any rsize of 4096 or above, the read performance is very low (around
300 Kbytes/sec). The 'nfsstat -c -r' command reports rapidly increasing RPC
retransmissions: the retransmission rate is about 1.3 times the call rate.
If I mount with rsize=2048 everything works out OK, and I have quite a good read
performance. The write performance is OK with various wsize options (I tested
1024, 2048, 4096 and 8192). This server has a 100Mbit connection. This server
worked flawlessly with rsize and wsize set to 8192 with kernel 2.4.18-4.

Now with an SGI server (IRIX64 6.5 10120105 IP25) the read performance is OK.
However the write performance is very very low (network traffic in the order of
32Kbytes per second) if I set wsize to 2048 or less. For 4096 or more the write
performance is OK. Looking at the network traffic with ethereal I noticed that
when wsize is 2048, each NFS write call is only 512 bytes (why not 2048?) and
that the reply always says "Committed: FILE_SYNC"; no RPC retransmissions happen.
However with wsize of 4096 each NFS write is 4096 bytes and the reply always
says "Committed: UNSTABLE". I don't know if this scenario worked OK under
2.4.18-4. The server has a 10Mbit connection. The rsize setting has no notable
effect on the read performance, which is always good.
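Given the behaviour described above, a practical workaround is to pin the transfer sizes in the mount options. A minimal sketch of /etc/fstab entries, with the server names, export paths, and mount points all invented for illustration (not taken from the report):

```
# Cap rsize at 2048 to avoid the RPC retransmission storm seen with the Solaris server
solaris-srv:/export/home  /mnt/solaris  nfs  nfsvers=3,rsize=2048,wsize=8192  0 0
# Raise wsize to 4096 to avoid the 512-byte FILE_SYNC writes seen with the IRIX server
sgi-srv:/export/data      /mnt/sgi      nfs  nfsvers=3,rsize=8192,wsize=4096  0 0
```

The same rsize/wsize values can of course be passed directly with `mount -o`.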

Version-Release number of selected component (if applicable): 2.4.18-5

How reproducible:

Steps to Reproduce:
Scenario 1:

1. mount share from Solaris server with rsize=8192
2. copy a file from the server -> bad performance (~300 Kbytes/sec)
3. umount share
4. mount share from Solaris server with rsize=2048
5. copy a file from the server -> good performance (~677 Kbytes/sec)

Scenario 2:
1. mount share from SGI server with wsize=2048
2. copy file to server -> extremely bad performance (~30 Kbytes/sec)
3. umount share
4. mount share from SGI server with wsize=4096
5. copy file to server -> good performance (~466 Kbytes/sec)
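The copy speeds quoted in the steps above can be measured with a short script rather than timing cp by hand. A generic sketch (the file path in the usage comment is a placeholder, not a path from the report) that reads a file sequentially and reports the throughput in KB/s:

```python
import time

def read_throughput_kbs(path, block_size=8192):
    """Read `path` sequentially in `block_size` chunks and return KB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    if elapsed <= 0:
        raise ValueError("file read too fast to time")
    return (total / 1024.0) / elapsed

# Usage, e.g. against a file on the NFS mount under test (placeholder path):
#   print("%.1f KB/s" % read_throughput_kbs("/mnt/solaris/testfile"))
```

Running it once per mount-option combination gives directly comparable numbers for the two scenarios.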

Actual Results:  Solaris server:
Bad read performance with lots of rpc retransmissions if rsize is more than 2048
(no badcalls seen at server). The amount of rpc retransmissions is about 1.3
times the amount of rpc calls.

IRIX server:
Extremely bad write performance if wsize is less than 4096. No rpc
retransmissions in this case (no badcalls reported on server either).
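The ~1.3x retransmission ratio can be computed from two successive `nfsstat -c` samples rather than watched by eye. A sketch, with the sample counter values invented for illustration (only the 1.3 ratio itself comes from the report):

```python
def retrans_ratio(before, after):
    """Ratio of new RPC retransmissions to new RPC calls between two
    nfsstat -c client samples (dicts with 'calls' and 'retrans' counters)."""
    calls = after["calls"] - before["calls"]
    retrans = after["retrans"] - before["retrans"]
    if calls == 0:
        raise ValueError("no RPC calls between samples")
    return retrans / calls

# Illustrative counter values only (not from the report):
before = {"calls": 1000, "retrans": 200}
after = {"calls": 2000, "retrans": 1500}
print(round(retrans_ratio(before, after), 2))  # a ratio of 1.3 matches the behaviour described
```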

Expected Results:  Good read and write NFS performance.

Additional info:

My machine has a 100Mbit connection, but through a PCMCIA card (so max
performance is around the 12 Mbits per second I believe).
My network card is a Xircom CreditCard Ethernet 10/100+ Modem 56 PCMCIA card
(CEM56-100) (driver xirc2ps_cs).
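For context, the ceilings implied by these link speeds can be sanity-checked with simple unit conversion. The script below only restates numbers already in the report (the ~12 Mbit/s PCMCIA estimate and the ~300 and ~677 KB/s observed rates):

```python
def mbit_to_kbytes_per_sec(mbit):
    """Convert a link rate in Mbit/s to KB/s (1 Mbit = 1,000,000 bits, 1 KB = 1024 bytes)."""
    return mbit * 1_000_000 / 8 / 1024

pcmcia_ceiling = mbit_to_kbytes_per_sec(12)   # ~1465 KB/s
good_case = 677   # KB/s observed with rsize=2048
bad_case = 300    # KB/s observed with the default rsize
print(f"PCMCIA ceiling ~{pcmcia_ceiling:.0f} KB/s; "
      f"even the good case reaches only {good_case / pcmcia_ceiling:.0%} of it")
```

Even the good case sits well below the card's estimated ceiling, which suggests the card itself is not the bottleneck in either scenario.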
Comment 1 diego.santacruz 2002-06-28 10:17:56 EDT
Forgot to mention, but if ethereal network traces are useful I can easily
provide those.
Comment 2 Jeremy Sanders 2002-07-05 05:13:44 EDT
Yes, the Red Hat 2.4.18-5 kernel has awful read performance as an NFS client (as
opposed to stock 2.4.17 or 2.4.19rc1). Here are some bonnie++ benchmarks against
a Linux server (the server's kernel has no effect on the results).

Over nfs:

Version 1.02a       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xpc1.ast.cam.ac. 1G  9242  52  9452   7   644   0  1624   8  1731   1 488.7   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1207   3   182  96  2157   7  1224   3  4521   5  1766   4


Version 1.02a       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xserv1.ast.cam.a 2G 22311  96 40600  13 16150   4 24853  96 45830   5 323.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2715  98 +++++ +++ +++++ +++  2784  98 +++++ +++  8798  99

Look at the nfs read speeds and rewrite speeds. Using a 2.4.17 or 2.4.19rc1
stock kernel on the client gives ~10MB/s read and write speeds (rather than
10MB/s write and 0.5MB/s read).
Comment 3 Jeremy Sanders 2002-07-05 05:16:10 EDT
Just to add that the machines in the above benchmark are connected over 100Mbps
ethernet with Intel eepro100 cards. Mounting options are:

Comment 4 Bugzilla owner 2004-09-30 11:39:43 EDT
Thanks for the bug report. However, Red Hat no longer maintains this version of
the product. Please upgrade to the latest version and open a new bug if the
problem persists.

The Fedora Legacy project (http://fedoralegacy.org/) maintains some older releases, 
and if you believe this bug is interesting to them, please report the problem in
the bug tracker at: http://bugzilla.fedora.us/
