Bug 82599 - Significant IO degradation under CPU load.
Product: Red Hat Linux
Classification: Retired
Component: kernel
Hardware: i686
OS: Linux
Priority: medium
Severity: medium
Assigned To: Arjan van de Ven
QA Contact: Brian Brock
Depends On:
Reported: 2003-01-23 17:03 EST by simra
Modified: 2007-04-18 12:50 EDT
CC List: 2 users

Doc Type: Bug Fix
Last Closed: 2004-09-30 11:40:26 EDT

Attachments: None
Description simra 2003-01-23 17:03:11 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20021130

Description of problem:
When the CPU is loaded by another process, I/O in other processes is significantly
degraded.  It is possible that only NFS performance is affected.

Specifically, I am compiling with g++.  About 50% of the headers used by my C++
file are on the local disk, and the rest are on an NFS-mounted filesystem. Without
any processor load, the time output of my compile command looks like this:
19.644u 0.732s 0:24.56 82.9%    0+0k 0+0io 1918pf+0w

That's about 24 seconds.  Now I run the following command:
perl -e 'while(1){$i+=3.141;}'

and recompile:
19.585u 0.701s 4:05.71 8.2%     0+0k 0+0io 1918pf+0w

4 minutes!  Let's renice +19 the perl process:
19.468u 0.660s 5:10.19 6.4%     0+0k 0+0io 1918pf+0w

5 minutes!

Now, I've tried this on both the Red Hat 2.4.18-14 and 2.4.18-19.8.0 kernels with
similar results.  On a custom-built 2.4.19 kernel on identical hardware, I don't
see this problem.

I had anecdotal evidence that increasing the I/O load on the box actually
improved performance, but I haven't been able to reproduce that.

Version-Release number of selected component (if applicable): kernel-2.4.18-14, kernel-2.4.18-19.8.0

How reproducible:

Steps to Reproduce:
Use the steps above.  My project is quite large and its include directives are
rather convoluted, so I can't provide the source here, but one could try
reproducing this by compiling the kernel source on an NFS-mounted filesystem.
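
For reference, a rough C equivalent of the perl load generator above might look
like this (a sketch only; any tight loop that keeps one CPU busy would do the
same job):

/* cpuhog.c - keep one CPU busy, roughly equivalent to
 *   perl -e 'while(1){$i+=3.141;}'
 * Build with: gcc -o cpuhog cpuhog.c
 */
int main(void)
{
    volatile double i = 0.0;   /* volatile keeps the compiler from dropping the loop */

    for (;;)
        i += 3.141;

    return 0;                  /* never reached */
}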

Actual Results:  Under the number-crunching load, compilation took about 8 times
as long as on the unloaded machine.

Expected Results:  Under a number-crunching load, compilation shouldn't take much
more than two or three times as long as it does when the machine is unloaded.

Additional info:
Comment 1 simra 2003-01-23 18:25:46 EST
FYI, I've tried simple test cases on NFS mounts using find and cat, but I can't
reproduce the problem that way.  I suspect that the problem is with open(),
since g++ calls it frequently while locating headers.  I'll attach more info if
I can produce a simple test case.
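
A minimal sketch of the sort of test case that could isolate open(): time a burst
of open() calls against an NFS-mounted include directory, covering both an
existing header and the ENOENT case, and compare the numbers with and without the
CPU load running.  The HIT and MISS paths below are placeholders, not paths from
the actual setup:

/* open_bench.c - time repeated open() calls the way a compiler's header
 * search issues them.  HIT and MISS are placeholder paths: point HIT at a
 * header that exists on the NFS mount and MISS at a name that does not
 * (the ENOENT case).  Build with: gcc -O2 -o open_bench open_bench.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define HIT  "/mnt/nfs/include/CGAL/basic.h"    /* placeholder: existing header */
#define MISS "/mnt/nfs/include/no_such_file.h"  /* placeholder: missing header */
#define N    1000

static double now(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    int i, fd;
    double t0, t1;

    t0 = now();
    for (i = 0; i < N; i++) {
        fd = open(HIT, O_RDONLY);
        if (fd >= 0)
            close(fd);
        open(MISS, O_RDONLY);     /* expected to fail with ENOENT */
    }
    t1 = now();

    printf("%d open() pairs in %.3f s (%.1f us per open)\n",
           N, t1 - t0, (t1 - t0) * 1e6 / (2.0 * N));
    return 0;
}

Running it once with the box idle and once with the CPU hog going should show
whether per-open() latency is where the time goes.
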
Comment 2 simra 2003-01-27 11:45:35 EST
Ok, here's an update on the problem.  I'm using a C++ computational geometry
package from CGAL.org.  It's immense and #include's hordes of headers.  If I
build it on the local disk and compile one of the examples while the CPU is
loaded, the compile time is reasonable, but if I do the same on NFS, it takes 4
or 5 times longer than building on NFS with no load on the CPU.

I've verified this on another machine with identical hardware and kernel, and
also verified that it's not a problem on identical hardware with a clean kernel
built from kernel.org source.

There are two possibilities: open() is taking too long, even for the ENOENT case,
or read() is taking too long (or both, which I suppose makes three possibilities).
If you have any suggestions for how to properly benchmark the systems, I'm all ears.
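
To separate out the read() possibility, a similar sketch could time reading a
single header-sized file in 4 KB chunks, roughly what the preprocessor does per
file.  Again the path is a placeholder, and client-side caching will make a
second run of the same file much faster, so compare fresh runs only:

/* read_bench.c - time reading one file in 4 KB chunks, roughly what the
 * preprocessor does per header.  HEADER is a placeholder path; point it at
 * a file on the NFS mount.  Build with: gcc -O2 -o read_bench read_bench.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define HEADER "/mnt/nfs/include/CGAL/basic.h"  /* placeholder path */

static double now(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    char buf[4096];
    ssize_t n;
    long total = 0;
    int fd;
    double t0, t1;

    fd = open(HEADER, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    t0 = now();
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    t1 = now();

    close(fd);
    printf("read %ld bytes in %.6f s\n", total, t1 - t0);
    return 0;
}

Comparing the open() and read() numbers with and without the CPU hog running
should indicate which call is being starved.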

If you want more info on how to reproduce, I'd be happy to provide it.  I should
point out that the CGAL package is the only one I've been able to reproduce this
with, but it's also the only package I know of that relies on so many headers.
Comment 3 Bugzilla owner 2004-09-30 11:40:26 EDT
Thanks for the bug report. However, Red Hat no longer maintains this version of
the product. Please upgrade to the latest version and open a new bug if the problem
persists.

The Fedora Legacy project (http://fedoralegacy.org/) maintains some older releases, 
and if you believe this bug is interesting to them, please report the problem in
the bug tracker at: http://bugzilla.fedora.us/
