Bug 219464 - nfs service on server increases abnormally high iowait
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: kernel
Version: 3.0
Platform: All Linux
Priority: medium  Severity: medium
Assigned To: Jeff Layton
Duplicates: 219465
Reported: 2006-12-13 06:46 EST by Rajdeep Sengupta
Modified: 2007-11-30 17:07 EST
Doc Type: Bug Fix
Last Closed: 2007-07-17 06:54:11 EDT
Description Rajdeep Sengupta 2006-12-13 06:46:05 EST
Description of problem:
The machine shows around 90% iowait, so the CPU appears heavily utilized even
though no heavy process is running on the system. We have found that this
happens once the nfs service is started.

Version-Release number of selected component (if applicable):
2.4.21-47.ELsmp
nfs-utils-1.0.6-44EL

How reproducible:
Boot the machine and run top:
 17:13:07  up 19:55,  2 users,  load average: 1.03, 1.45, 1.51
63 processes: 62 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.2%    0.0%    0.5%   0.2%     0.3%   81.1%   17.7%
           cpu00    0.2%    0.0%    0.2%   0.2%     0.6%   78.6%   20.2%
           cpu01    0.2%    0.0%    0.8%   0.2%     0.0%   83.6%   15.2%
Mem:  2055236k av, 1894172k used,  161064k free,       0k shrd,  434180k buff
                   1332488k actv,  113116k in_d,   28584k in_c
Swap: 2096440k av,       0k used, 2096440k free                 1081172k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 5546 root      15   0     0    0     0 SW    0.2  0.0   5:22   0 nfsd
 5553 root      15   0     0    0     0 SW    0.2  0.0   5:34   1 nfsd
 5552 root      15   0     0    0     0 SW    0.1  0.0   5:15   1 nfsd
20119 root      15   0  1092 1092   888 S     0.1  0.0   0:01   0 top
20265 root      15   0  1088 1088   884 R     0.1  0.0   0:00   1 top
    1 root      15   0   300  300   240 S     0.0  0.0   0:05   1 init
    2 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
    3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
    4 root      15   0     0    0     0 SW    0.0  0.0   0:05   1 keventd
    5 root      34  19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd/0
    6 root      34  19     0    0     0 SWN   0.0  0.0   0:00   1 ksoftirqd/1
    9 root      15   0     0    0     0 SW    0.0  0.0   0:03   1 bdflush
    7 root      15   0     0    0     0 SW    0.0  0.0   2:19   1 kswapd
    8 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 kscand
   10 root      15   0     0    0     0 SW    0.0  0.0   0:37   0 kupdated
   11 root      25   0     0    0     0 SW    0.0  0.0   0:00   0 mdrecoveryd
   18 root      25   0     0    0     0 SW    0.0  0.0   0:00   0 scsi_eh_0
   21 root      15   0     0    0     0 SW    0.0  0.0   0:03   1 kjournald
   76 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 khubd
  667 root      15   0     0    0     0 SW    0.0  0.0   1:00   0 kjournald
  668 root      15   0     0    0     0 SW    0.0  0.0   0:21   1 kjournald
  669 root      15   0     0    0     0 SW    0.0  0.0   2:22   1 kjournald
  670 root      25   0     0    0     0 SW    0.0  0.0   0:00   1 kjournald
  671 root      15   0     0    0     0 SW    0.0  0.0   0:50   0 kjournald
  672 root      15   0     0    0     0 SW    0.0  0.0   0:00   1 kjournald
  673 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 kjournald
  674 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 kjournald
  675 root      15   0     0    0     0 SW    0.0  0.0   0:00   1 kjournald
  676 root      15   0     0    0     0 SW    0.0  0.0   0:03   0 kjournald
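For anyone digging into where the iowait column above comes from: top derives it from the cumulative jiffy counters on the "cpu" lines of /proc/stat (the RHEL 3 2.4.21 kernels report an iowait field there, as the top output shows). A minimal sketch of that computation, assuming the usual seven-field layout (user, nice, system, idle, iowait, irq, softirq); the helper names are illustrative, not part of any tool:

```python
def iowait_percent(before, after):
    """Percentage of elapsed time spent in iowait between two samples.

    Each sample is a tuple of cumulative jiffy counters from a /proc/stat
    "cpu" line: (user, nice, system, idle, iowait, irq, softirq).
    """
    delta = [b - a for a, b in zip(before, after)]
    return 100.0 * delta[4] / sum(delta)


def read_cpu_counters(path="/proc/stat"):
    """Read the aggregate "cpu" line; assumes at least seven numeric fields."""
    with open(path) as f:
        fields = f.readline().split()
    return tuple(int(n) for n in fields[1:8])


# Usage on a live system: take two samples roughly a second apart, e.g.
#   s1 = read_cpu_counters(); time.sleep(1); s2 = read_cpu_counters()
#   print(f"iowait: {iowait_percent(s1, s2):.1f}%")
```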
Actual results:

The load average exceeds 1, though no heavy process is running.
Expected results:
The load average should be low, and iowait should be near 0%.

Additional info:
Comment 1 Jeff Layton 2007-07-17 06:48:24 EDT
*** Bug 219465 has been marked as a duplicate of this bug. ***
Comment 2 Jeff Layton 2007-07-17 06:54:11 EDT
From the CPU's point of view, iowait is the same as idle. It's simply a
(somewhat fuzzy) measure of processes that are sleeping, waiting for some I/O
to come back. I'd like to refer you to this kbase article:

http://kbase.redhat.com/faq/FAQ_80_5637.shtm

I'm going to close this as NOTABUG, since high I/O wait isn't necessarily a bug
(or even a problem). Please reopen the case if you have evidence to the contrary.
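This point can be checked against the top output in the original report: subtracting iowait and idle from 100% shows how little CPU time was actually being consumed. A quick sketch of the arithmetic, using the figures from the "total" line above:

```python
# CPU percentages from the "total" line of the top output in the report.
user, nice, system, irq, softirq = 0.2, 0.0, 0.5, 0.2, 0.3
iowait, idle = 81.1, 17.7

# iowait is time the CPU spent with nothing runnable while some task
# waited on I/O -- the CPU was just as available as during plain idle.
busy = user + nice + system + irq + softirq
available = idle + iowait

print(f"actually busy:           {busy:.1f}%")       # 1.2%
print(f"available (idle+iowait): {available:.1f}%")  # 98.8%
```

In other words, the machine was about 1% busy; the alarming 81% iowait simply means the nfsd threads were asleep waiting for disk or network I/O.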
