Bug 56602 - iostat always shows 100 percent disk utilization
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 2.1
Classification: Red Hat
Component: kernel
Version: 2.1
Hardware: i686 Linux
Priority: medium
Severity: medium
Assigned To: Larry Woodman
QA Contact: Brian Brock
Duplicates: 81393
Reported: 2001-11-21 13:41 EST by Idson Tirelli
Modified: 2007-11-30 17:06 EST
Fixed In Version: kernel-2.4.9-e.64
Doc Type: Bug Fix
Last Closed: 2007-07-09 03:26:28 EDT

Attachments: None
Description Idson Tirelli 2001-11-21 13:41:10 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.78 [en] (X11; U; Linux 2.4.7-12.3mdk i686)

Description of problem:
Despite the fact that the machine is not suffering from disk bottlenecks or
I/O wait-queue depletion, the iostat command always shows 100 %util.

Below is a short example. Here vmstat shows that bi and bo are fairly low and
come in bursts, which is not consistent with what iostat reports:

[db2inst1@tomate4 db2inst1]$ vmstat 1
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 4  0  0      0  19052  26112 665044   0   0     3    91   85    69  21   5  74
 6  2  2      0  12424  26128 665464   0   0     0  1408  643   615  60  10  31
 9  0  0      0   7296  26176 665908   0   0     0   184  695   631  81  13   6
 8  0  0      0  10692  26184 666216   0   0     0  1036  748   745  63  12  25
 9  0  0      0  12976  26184 666332   0   0     0     0  575   637  47  11  42
 0  0  0      0  18940  26184 666484   0   0     0     0  558   540  40  10  50
 1  0  3      0  21128  26220 666528   0   0     0  1008  297   338  19   3  78
 3  0  0      0  20784  26248 666692   0   0     0    60  503   519  38   4  58
 0  0  0      0  19348  26248 666832   0   0     0     0  386   379  24   5  70
 0  0  0      0  17604  26248 666832   0   0     0     0  147    67   2   3  95
 0  0  0      0  17660  26248 667008   0   0     0     0  630   650  58  11  31
 0  0  1      0  16608  26252 667276   0   0     4  1588  477   363  37   5  58
 0  0  0      0  20952  26300 667628   0   0     0   324  567   556  43   4  53
 1  0  0      0  20952  26308 667792   0   0     0   332  427   457  39   6  55
 1  0  0      0  20952  26308 668004   0   0     0     0  473   493  40   3  57
 0  0  0      0  19152  26308 668004   0   0     0     0  285   208   8   2  90
10  0  0      0  18244  26308 668004   0   0     0     0  199    91   5   2  93
 3  0  0      0  17356  26348 668160   0   0     0   992  608   541  52   9  39
 0  0  0      0  20496  26356 668272   0   0     8     0  322   314  21   6  73
 1  0  0      0  20732  26356 668336   0   0     0     0  381   388  20   6  74
 0  0  0      0  20628  26356 668420   0   0     0     0  385   391  19   7  74
 0  0  0      0  20528  26356 668452   0   0     0     0  247   203  10   0  89
 0  0  0      0  20528  26396 668452   0   0     0  1256  241    89   1   1  98
 0  0  0      0  20432  26408 668476   0   0     0    56  258   228  10   1  89
 0  0  0      0  20120  26408 668504   0   0     0     0  271   233  10   4  86
 0  0  0      0  19988  26408 668504   0   0     0     0  150    68   1   2  97
 0  0  0      0  19728  26408 668544   0   0     0     0  347   340  12   6  82
 1  0  0      0  19728  26448 668688   0   0     0   172  339   321  22   2  76
 1  0  0      0  18452  26448 668972   0   0     0     0  451   478  30  11  59
 0  0  0      0  18452  26448 669008   0   0     0     0  320   272  10   2  88
 0  0  0      0  18200  26448 669024   0   0     0     0  247   192   6   4  90
 2  0  0      0  17900  26448 669124   0   0     0     0  324   270   8   4  88
 0  0  0      0  17844  26488 669144   0   0     0  1444  464   256   4   2  94
 0  0  0      0  17660  26488 669432   0   0     0     0  514   554  47   5  49
 0  0  0      0  17588  26500 669452   0   0     0   548  351   277   7   3  90
 0  0  0      0  17588  26500 669460   0   0     0     0  318   246   3   1  96
 1  0  0      0  17676  26500 669460   0   0     0     0  304   234   5   2  93
 0  0  0      0  17392  26540 669460   0   0     0   132  222   173   3   2  95

[db2inst1@tomate4 db2inst1]$ iostat -x -d 1
Linux 2.4.7-10enterprise (tomate4) 11/21/2001

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.92  61.36  0.52 37.18   11.49  796.44    21.43     1.85   37.58  25.18   9.49
sda1       0.00   0.00  0.00  0.00    0.01    0.00     7.38     0.00  362.87 237.13   0.00
sda2       0.05   1.79  0.04  0.97    0.75   22.11    22.58     1.86 1835.70 571.12   5.78
sda3       0.14   0.75  0.11  0.65    2.00   11.18    17.44     0.86 1140.41 698.86   5.28
sda5       0.00   0.00  0.00  0.00    0.00    0.00     8.00     0.00   50.00  50.00   0.00
sda6       0.05   0.28  0.01  0.54    0.55    6.59    12.86     0.52  934.66 859.02   4.77
sda8       0.04   0.86  0.01  0.59    0.35   11.62    19.91     0.62 1033.37 782.39   4.70
sda9       0.64  57.67  0.34 34.43    7.83  744.94    21.65     1.00   54.52  29.54  10.27

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949422.96    0.00   0.00 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda8       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949542.96    0.00   0.00 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   7.00  0.00  2.00    0.00   72.00    36.00 42949422.96    0.00 5000.00 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   7.00  0.00  2.00    0.00   72.00    36.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda8       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949542.96    0.00   0.00 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00 102.00  0.00 36.00    0.00 1112.00    30.89 42949436.66  380.56 277.78 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   9.00  0.00  7.00    0.00  128.00    18.29     4.10  585.71 128.57   9.00
sda3       0.00   9.00  0.00  2.00    0.00   88.00    44.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   1.00  0.00  2.00    0.00   24.00    12.00     0.00    0.00   0.00   0.00
sda8       0.00   2.00  0.00  2.00    0.00   32.00    16.00     0.00    0.00   0.00   0.00
sda9       0.00  81.00  0.00 23.00    0.00  840.00    36.52 42949552.56  417.39 434.78 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949422.96    0.00   0.00 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda8       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949542.96    0.00   0.00 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949422.96    0.00   0.00 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda8       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949542.96    0.00   0.00 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949422.96    0.00   0.00 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda8       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949542.96    0.00   0.00 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949422.96    0.00   0.00 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda8       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00 42949542.96    0.00   0.00 100.00

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00 131.00  0.00 53.00    0.00 1472.00    27.77 42949464.76  788.68 186.79  99.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   7.00  0.00  2.00    0.00   72.00    36.00     2.00 1000.00 1000.00  20.00
sda3       0.00   2.00  0.00  9.00    0.00   88.00     9.78    14.10 1566.67 244.44  22.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   1.00  0.00  2.00    0.00   24.00    12.00     2.00 1000.00 1000.00  20.00
sda8       0.00   2.00  0.00  2.00    0.00   32.00    16.00     2.00 1000.00 1000.00  20.00
sda9       0.00 119.00  0.00 38.00    0.00 1256.00    33.05 42949564.66  571.05 263.16 100.00


[db2inst1@tomate4 db2inst1]$ 

 

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Install Red Hat Linux 7.2 on a dual Pentium III (1 GHz, 1.7 GB RAM) machine
with one SCSI disk.
2. Run a common database application such as MySQL or DB2.
3. Run vmstat, then run iostat. Their figures don't match.


Actual Results:  iostat shows 100 percent disk utilization.

Expected Results:  disk utilization should vary and stay below 100%.

Additional info:
Comment 1 Trond Eivind Glomsrød 2001-12-11 16:11:02 EST
Can you try the errata kernel?
Comment 2 Need Real Name 2001-12-12 17:31:43 EST
I am seeing the same problem on a 2.4.7-10smp kernel with SCSI disks
and dual 1.25 GHz Pentium IIIs. Once the disk utilization reaches 100%,
it stays there and does not come down even with no I/O activity.
Comment 3 Trond Eivind Glomsrød 2001-12-12 17:36:15 EST
Can you see if this is also a problem with the errata kernels? iostat doesn't do
much more than parse a few fields in /proc...
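
[Illustration: a minimal user-space sketch of the kind of parsing sysstat
does, assuming the 2.4-era /proc/partitions layout quoted later in this bug,
where each disk line ends with the "running use aveq" counters that %util
and avgqu-sz are derived from. Not the actual sysstat source.]

/* parse_partitions.c - print the tail counters iostat derives %util from */
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/proc/partitions", "r");
    char line[512], name[32];
    long running, use, aveq;

    if (fp == NULL)
        return 1;
    while (fgets(line, sizeof(line), fp)) {
        /* 2.4 format: major minor #blocks name rio rmerge rsect ruse
         *             wio wmerge wsect wuse running use aveq */
        if (sscanf(line,
                   "%*d %*d %*d %31s %*d %*d %*d %*d %*d %*d %*d %*d %ld %ld %ld",
                   name, &running, &use, &aveq) == 4)
            printf("%s: running=%ld use=%ld aveq=%ld\n",
                   name, running, use, aveq);
    }
    fclose(fp);
    return 0;
}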
Comment 4 Trond Eivind Glomsrød 2001-12-12 18:43:07 EST
Also, a newer sysstat is available from http://people.redhat.com/teg/sysstat/

If the newer kernel doesn't help, this one might.
Comment 5 Need Real Name 2002-01-14 18:05:05 EST
Hi,
I'm not sure which patch to use for the errata kernel. We traced the problem
to /proc/partitions (the "use" variable) - it shows a lot of activity even when
the disk is idle. The average queue size is also wrong. We are running the
latest version of sysstat (4.0.2) on RH7.2 with 2.4.7-10smp.

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.00  0.00  0.00    0.00    0.00     0.00 8589874.59    0.00   0.00 100.00


We could not reproduce this problem on the uni-processor kernel, so this
looks like an SMP bug.

Thanks,
nawaaz
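
[Illustration: the bogus avgqu-sz values in this thread sit just below
2^32/100, which is what a small negative 32-bit counter looks like once it is
printed as unsigned and scaled. A minimal sketch, assuming a counter that has
under-run by 25000; the 42949422.96 in the initial report is exactly
(2^32 - 25000) / 100.]

/* wraparound.c - why a slightly negative counter shows up as ~42949422 */
#include <stdio.h>

int main(void)
{
    int aveq = -25000;                       /* counter after unbalanced decrements */
    unsigned int raw = (unsigned int)aveq;   /* wraps to 4294942296 */

    printf("raw      = %u\n", raw);           /* 4294942296 */
    printf("avgqu-sz = %.2f\n", raw / 100.0); /* 42949422.96, as reported */
    return 0;
}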

Comment 6 Trond Eivind Glomsrød 2002-01-14 18:09:14 EST
Can you try the errata kernel? 2.4.7-10smp is not current.
Comment 7 Need Real Name 2002-01-15 16:18:58 EST
Thanks! The errata kernel has fixed this problem.

Nawaaz
Comment 8 Need Real Name 2002-01-15 20:53:34 EST
Seems like I spoke too soon - the problem is still there. It has moved from
device sda to sdb and sdc.

vmstat output
-------------

 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  0  0 1237188 414520  53440 2704792   0   0     0    10  107    17   0   0 100
 0  0  0 1237188 414516  53440 2704792   0   0     0     2  106    15   0   0 100
 0  0  0 1237188 414512  53440 2704792   0   0     0     2  106    15   0   0 100

iostat output
-------------
Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   0.20  0.00  0.40    0.00    4.80    12.00     0.00    0.00   0.00   0.00
sdb        0.00   0.00  0.00  0.00    0.00    0.00     0.00    60.00    0.00   0.00 100.00
sdc        0.00   0.00  0.00  0.00    0.00    0.00     0.00    50.00    0.00   0.00 100.00
sdd        0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
Comment 9 Need Real Name 2002-01-22 17:11:29 EST
Hi,
I've found the problem in 2.4.9-13. There is a race condition in
drivers/scsi/scsi_lib.c (where the I/O statistics counters are accessed):
req_finished_io() is called without the io_request_lock held.

I'm including a patch for the problem.

------------------------------------------------------------
diff -ruN linux-2.4.9-13/drivers/scsi/scsi_lib.c linux-2.4.9-13.patched/drivers/scsi/scsi_lib.c
--- linux-2.4.9-13/drivers/scsi/scsi_lib.c	Tue Oct 30 16:02:21 2001
+++ linux-2.4.9-13.patched/drivers/scsi/scsi_lib.c	Tue Jan 22 13:04:32 2002
@@ -426,7 +426,9 @@
 	if (req->waiting != NULL) {
 		complete(req->waiting);
 	}
+	spin_lock_irq(&io_request_lock);
 	req_finished_io(req);
+	spin_unlock_irq(&io_request_lock);
 	add_blkdev_randomness(MAJOR(req->rq_dev));
 
        SDpnt = SCpnt->device;
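
[Illustration: a standalone user-space analogue (pthreads, hypothetical
names, not kernel code) of the race this patch closes. Two CPUs performing an
unlocked read-modify-write on a shared counter lose updates, which is how the
in-flight count drifts away from zero.]

/* race_demo.c - build with: cc -pthread race_demo.c */
#include <pthread.h>
#include <stdio.h>

static volatile long ios_in_flight = 2000000;

static void *completion_path(void *arg)
{
    long i;
    for (i = 0; i < 1000000; i++)
        ios_in_flight--;        /* non-atomic: load, decrement, store */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, completion_path, NULL);
    pthread_create(&b, NULL, completion_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* With correct locking this would print 0; lost updates leave it > 0. */
    printf("ios_in_flight = %ld\n", ios_in_flight);
    return 0;
}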
Comment 10 Trond Eivind Glomsrød 2002-01-22 17:22:24 EST
Doug, do you have any comments on this patch?
Comment 11 Doug Ledford 2002-01-22 17:33:26 EST
Ugh...I don't like the idea of grabbing a spin lock there just to futz with a
couple counters.  I'd be much happier switching the entire accounting code to
use atomic_{inc,dec} than grabbing a spin lock in that particular area (not to
mention that it would conflict with my iorl patch).  I'll look into fixing it in
our tree and see what would actually work best (for instance, I don't know that
the SCSI code is the only place with this problem, so fixing it at ll_rw_block
might be a saner course of action).
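
[Illustration: a user-space sketch of the direction Doug suggests, using C11
atomics as a stand-in for the kernel's atomic_{inc,dec}; the names are
illustrative, not the actual ll_rw_blk.c structures.]

/* atomic_demo.c - build with: cc -pthread atomic_demo.c */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_long ios_in_flight;

static void *io_path(void *arg)
{
    long i;
    for (i = 0; i < 1000000; i++) {
        atomic_fetch_add(&ios_in_flight, 1);  /* req_new_io() side */
        atomic_fetch_sub(&ios_in_flight, 1);  /* req_finished_io() side */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, io_path, NULL);
    pthread_create(&b, NULL, io_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Always 0: atomic read-modify-write cannot lose updates,
     * and no spinlock is needed around the accounting. */
    printf("ios_in_flight = %ld\n", atomic_load(&ios_in_flight));
    return 0;
}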
Comment 12 Tobias Meier 2002-11-28 09:29:18 EST
The problem still exists with the newest 7.2 kernel:
---snip---
root@lnxc-551:~ >uname -a
Linux lnxc-551 2.4.18-18.7.xsmp #1 SMP Wed Nov 13 19:01:42 EST 2002 i686 unknown

root@lnxc-551:~ >iostat -x 5
Linux 2.4.18-18.7.xsmp (lnxc-551)       11/28/02

avg-cpu:  %user   %nice    %sys   %idle
           1.91    0.04    2.01   96.04

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        1.17  41.50  0.23 17.05   11.19  469.90    27.83     0.25   20.52   7.20   1.24
sda1       0.00   0.00  0.00  0.00    0.00    0.00    11.83     0.00  645.19 463.24   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     8.00     0.00   45.00  45.00   0.00
sda3       0.00   0.03  0.01  0.03    0.07    0.45    13.91     0.06 1511.47 847.59   0.32
sda5       0.02   0.47  0.01  0.99    0.19   11.70    11.94     0.28  277.51  75.73   0.75
sda6       0.02   0.47  0.02  0.31    0.29    6.32    19.71     0.15  457.99 556.49   1.87
sda7       0.02   1.40  0.03  1.09    0.36   20.15    18.30     0.18  161.97 243.65   2.73
sda8       0.00   0.07  0.00  0.06    0.00    1.06    17.45     0.11 1874.58 829.12   0.51
sda9       0.00   0.05  0.00  0.01    0.00    0.12    16.61     0.01 1764.83 759.42   0.05
sda10      1.10  19.79  0.13  7.40    9.86  218.25    30.30     0.11   45.20   1.43   0.11
sda11      0.02  19.22  0.04  7.17    0.42  211.85    29.48     0.20   27.80  26.15   1.88

avg-cpu:  %user   %nice    %sys   %idle
           1.60    0.00    2.50   95.90

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00  34.20  0.00 13.80    0.00  384.00    27.83 8589253.42 1364.06 724.64 100.00
sda1       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda2       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda3       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda5       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda6       0.00   0.80  0.00  0.40    0.00    9.60    24.00     0.48 1200.00 1200.00   4.80
sda7       0.00   1.40  0.00  0.40    0.00   14.40    36.00     0.49 1230.00 1230.00   4.92
sda8       0.00   0.20  0.00  0.40    0.00    4.80    12.00     0.46 1145.00 1140.00   4.56
sda9       0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
sda10      0.00   8.20  0.00  8.40    0.00  132.80    15.81    11.09 1320.24  60.48   5.08
sda11      0.00  23.60  0.00  4.20    0.00  222.40    52.95     6.30 1500.95 129.05   5.42
root@lnxc-551:~ >cat /proc/partitions
major minor  #blocks  name     rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

   8     0   35532800 sda 59149 300895 2871916 3272449 4377940 10652007 120613560 6101409 -70 3237240 3631296
   8     1      48163 sda1 79 507 1172 816 29 24 106 6152 0 5003 6968
   8     2    1847475 sda2 2 0 16 90 0 0 0 0 9 9
   8     3    4192965 sda3 1882 404 17802 14099 7691 6729 115352 1432835 0 811394 1446935
   8     4          1 sda4 0 0 0 0 0 0 0 0 0 0 0
   8     5    2096451 sda5 2310 3864 48906 112343 253144 121925 3001880 6983022 0 1937250 7095313
   8     6    2096451 sda6 5273 4000 73698 50179 80833 120011 1623864 3900911 0 4792357 3951116
   8     7    2096451 sda7 6739 4831 92554 29726 280885 359279 5172072 4639864 0 7009729 4669657
   8     8    1052226 sda8 104 102 1162 2230 15562 18459 272144 2934876 0 1299613 2937103
   8     9     265041 sda9 45 40 170 1595 1780 13279 30142 320486 0 138595 322083
   8    10   10241406 sda10 33477 282718 2529242 2608027 1898900 5078413 56020888 6254845 0 280653 2970317
   8    11   10241406 sda11 9197 4194 106642 452947 1839116 4933888 54377112 4794363 0 4837483 5253181

---snip---
Comment 13 Patrick C. F. Ernzer 2002-11-28 09:45:11 EST
Tobias forgot to reopen, doing so now.
Comment 14 Kai 'wusel' Siering 2003-03-28 16:27:35 EST
The problem -- wrong counters in /proc/partitions -- still exists with the
current BETA (8.0.94) as well as the current Advanced Server 2.1:

root@death:~ # cat /etc/redhat-release ; uname -a ; cat /proc/partitions
Red Hat Linux release 8.0.94 (Phoebe)
Linux death 2.4.20-2.48.PD #5 Fri Feb 28 22:16:43 CET 2003 i686 athlon i386 GNU/Linux
major minor  #blocks  name     rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

  33     0   97685784 hde 379517 285135 5244332 2108070 633082 1856022 19863840 7668340 -11 4807339 42846229
  33     1     136521 hde1 85 213 596 750 50 43 186 1450 0 1990 2200
  33     2   20482875 hde2 143018 9395 1163162 608570 20663 56556 568798 3654830 0 644290 4269680
  33     3    9221310 hde3 233243 273056 4050762 1467690 609521 1792086 19220264 3608720 0 1253920 5086690
  33     4          1 hde4 0 0 0 0 0 0 0 0 0 0 0
  33     5   17470656 hde5 173 450 2170 1740 179 330 3286 20520 0 6750 22260
  33     6   50371776 hde6 2995 2012 27618 29300 2669 7007 71306 382820 0 41740 412120
  33    64   80043264 hdf 23167 24230 330250 239730 32914 193568 1796858 2604670 -12 4811729 31087419
  33    65   80035798 hdf1 23166 24227 330242 239720 32914 193568 1796858 2604670 0 253950 2844420
  22     0   39082680 hdc 3386 2112 30700 16310 1782 4406 45838 26090 -11 4811929 33099529
  22     1     136521 hdc1 11 49 120 110 0 0 0 0 0 110 110
  22     2     530145 hdc2 13 41 168 60 0 0 0 0 0 60 60
  22     3    5124735 hdc3 19 105 248 100 0 0 0 0 0 100 100
  22     4          1 hdc4 0 0 0 0 0 0 0 0 0 0 0
  22     5    8201151 hdc5 11 49 120 80 0 0 0 0 0 80 80
  22     6    5646816 hdc6 11 49 120 80 0 0 0 0 0 80 80
  22     7   11382021 hdc7 1574 1198 14730 9960 1421 3627 37540 25230 0 11860 35190
  22     8    8056566 hdc8 1741 603 15146 5850 361 779 8298 860 0 4980 6710
   3     0   29316672 hda 16627 8922 198386 40190 15208 111538 1015452 12315800 -13 4806479 35830540
   3     1    1542208 hda1 11 49 120 140 0 0 0 0 0 140 140
   3     2    5630782 hda2 11 49 120 60 0 0 0 0 0 60 60
   3     3          1 hda3 0 0 0 0 0 0 0 0 0 0 0
   3     4     530113 hda4 19 105 248 100 0 0 0 0 0 100 100
   3     5     530113 hda5 13008 383 106864 22970 14630 109944 998584 12304610 0 7651430 12342110
   3     6   11365956 hda6 3554 8216 90746 16770 578 1594 16868 11190 0 17670 27960
   3     7   10241406 hda7 19 105 248 110 0 0 0 0 0 110 110


root@lnxh-038:~ # cat /etc/redhat-release ; uname -a ; cat /proc/partitions
Red Hat Linux Advanced Server release 2.1AS (Pensacola)
Linux lnxh-038 2.4.9-e.3smp #1 SMP Fri May 3 16:48:54 EDT 2002 i686 unknown
major minor  #blocks  name     rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

   8     0    8885632 sda 12229 4774 131374 38490 86902 59058 1137540 109830 -1 88963720 51341430
   8     1      48163 sda1 94 458 1104 130 29 23 104 120 0 190 250
   8     2    1847475 sda2 5 0 34 50 0 0 0 0 0 50 50
   8     3    2096482 sda3 18 10 110 30 14 6 136 50 0 80 80
   8     4          1 sda4 1 0 2 10 0 0 0 0 0 10 10
   8     5    2096451 sda5 7606 1513 72838 16790 11732 13777 204488 21550 0 20410 38340
   8     6    1052226 sda6 1104 1460 20416 7250 53594 16704 562512 40560 1 26324430 26349110
   8     7    1052226 sda7 3334 1235 36540 13770 18768 26193 360056 45380 0 19380 59150
   8     8     265041 sda8 36 68 208 340 2765 2355 10244 2180 0 1730 2520
   8    16  177728640 sdb 63 93 540 3650 49511 2804833 5710178 127448870 1 30567830 158006940
   8    17   10241406 sdb1 36 66 354 80 4141 189587 387922 299320 0 4220 299400
   8    18  167485657 sdb2 16 3 116 3530 45369 2615246 5322254 127149550 0 73340 127153110
   8    32  177728640 sdc 54 27 354 4260 51715 2802901 5710076 123036000 -3 31231650 29457250
   8    33   10241406 sdc1 8 0 46 40 6192 187542 387738 10500 -1 31231610 -31221200
   8    34  167485657 sdc2 33 3 234 4200 45521 2615359 5322334 123025510 -2 31219620 60651770


Could you please finally disable /proc/partitions or fix that long-standing bug?
Thanks,
-kai

-- 
The views expressed here are not necessarily those of any employer.
 
"Usenet seems to run much like the Kif (or, for the TV generation, Klingon)
 high command. Whoever takes action and can be heard wins."
                                     -- Peter da Silva <peter@ferranti.com>
Comment 15 Rhett Butler 2003-06-19 17:35:00 EDT
See Comment #8 in Bug #63977
Comment 16 Larry Woodman 2005-04-04 10:55:07 EDT
I found and fixed this problem in RHEL 2.1.

There are bugs reported against various utilities (mostly iostat) where
various numbers don't make any sense. The problem turned out to be that the
/proc/partitions output has negative numbers in the hd->ios_in_flight field
(3rd from the end) whenever you have IDE disks. This was caused by the ioctl
path not calling req_new_io() (which increments hd->ios_in_flight) while the
completion interrupt handler calls req_finished_io() (which decrements
hd->ios_in_flight). This resulted in hd->ios_in_flight going further and
further negative after each ioctl. The fix is to have req_finished_io() do
this accounting (decrement hd->ios_in_flight) only for READ or WRITE
commands, since that's the only time hd->ios_in_flight is incremented (via
__make_request).

As you can see, the running field (3rd from the end) goes negative:

-------------cat /proc/partitions without the patch------------
3     0  2818150 hda ... 24507465 4294962326 33537177 31145605

-------------cat /proc/partitions with patch------------------
3     0   20005650 hda ... 2818150 0 80550 2913510

This patch fixes several bugs and issues reported against several Red Hat
releases.

--- linux/drivers/block/ll_rw_blk.c.orig
+++ linux/drivers/block/ll_rw_blk.c
@@ -826,11 +826,14 @@ void req_new_io(struct request *req, int
 void req_finished_io(struct request *req)
 {
 	struct hd_struct *hd1, *hd2;
-	locate_hd_struct(req, &hd1, &hd2);
-	if (hd1)
-		account_io_end(hd1, req);
-	if (hd2)	
-		account_io_end(hd2, req);
+
+	if ((req->cmd == READ) || (req->cmd == WRITE)) {
+		locate_hd_struct(req, &hd1, &hd2);
+		if (hd1)
+			account_io_end(hd1, req);
+		if (hd2)	
+			account_io_end(hd2, req);
+	}
 }
 
 /*
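
[Illustration: a simplified model of the invariant the patch restores - only
requests counted on entry (READ/WRITE via __make_request) may be uncounted on
completion. Names and types are illustrative, not the actual kernel code.]

/* accounting_demo.c - the ioctl path must not decrement the counter */
#include <stdio.h>

enum cmd { CMD_READ, CMD_WRITE, CMD_IOCTL };

static int ios_in_flight;

static void start_request(enum cmd c)
{
    if (c == CMD_READ || c == CMD_WRITE)
        ios_in_flight++;    /* __make_request counts only READ/WRITE */
}

static void finish_request(enum cmd c)
{
    if (c == CMD_READ || c == CMD_WRITE)    /* the fix: mirror the increment */
        ios_in_flight--;
    /* before the fix, this decremented unconditionally, so every
     * ioctl completion pushed ios_in_flight further negative */
}

int main(void)
{
    start_request(CMD_IOCTL);   /* never counted on entry ... */
    finish_request(CMD_IOCTL);  /* ... so must not be uncounted on exit */
    start_request(CMD_WRITE);
    finish_request(CMD_WRITE);
    printf("ios_in_flight = %d\n", ios_in_flight);  /* 0, not -1 */
    return 0;
}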
Comment 17 Don Howard 2007-07-09 03:50:09 EDT
*** Bug 81393 has been marked as a duplicate of this bug. ***
