Bug 113403 - iostat reports abnormally large avgqu-sz
iostat reports abnormally large avgqu-sz
Status: CLOSED WONTFIX
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 9
Hardware: i386 Linux
Priority: low
Severity: low
Assigned To: Arjan van de Ven
Brian Brock
Depends On:
Blocks:
Reported: 2004-01-13 11:37 EST by Jim Laverty
Modified: 2005-10-31 17:00 EST
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2004-09-30 11:41:46 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---


Attachments
Patch to fix negative stats in /proc/partitions (1.57 KB, patch)
2004-02-16 20:06 EST, Philip Pokorny
patch to solve incorrect data reported from iostat under heavy merging (2.61 KB, patch)
2004-02-23 12:38 EST, Jeremy McNicoll

Description Jim Laverty 2004-01-13 11:37:18 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.6b)
Gecko/20031208

Description of problem:
The iostat utility reports an abnormally large avgqu-sz on IDE-drive-based
systems (refer to old Bugzilla bug ID 78749).

This issue still exists in Red Hat 9 using sysstat-4.0.7-3 (kernel
2.4.20-20.9smp). After further testing on three (3) IDE-based systems
and four (4) different SCSI-based systems, the problem seems to appear
only on IDE-drive-based systems.

The crazy inode sizes in sar ('sar -v') referred to in bug ID 78749,
however, seem to have stopped with this release.



Version-Release number of selected component (if applicable):
sysstat-4.0.7-3

How reproducible:
Always

Steps to Reproduce:
1.  Run 'iostat -x 1 100'

Actual Results:
Very large avgqu-sz (42949652.96) on a mostly idle server:

avg-cpu:  %user   %nice    %sys   %idle
           0.50    0.00    0.25   99.25

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00  17.00  0.00  5.00    0.00  176.00     0.00    88.00
   35.20 42949652.96    0.00 200.00 100.00


Expected Results:  A realistic avgqu-sz value for performance metrics.
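Worth noting: the bogus figure times 100 is 4,294,965,296, which is 2^32 - 2000. That is exactly what a counter that has dropped slightly below zero looks like when it is read back as an unsigned 32-bit value. A minimal sketch of that arithmetic (the -2000 underflow and the divide-by-100 scaling are illustrative assumptions inferred from the reported number, not values taken from the kernel source):

```python
# A queue-time counter that has dropped slightly below zero (here by an
# illustrative 2000 ticks), reinterpreted as an unsigned 32-bit integer.
underflowed = -2000 & 0xFFFFFFFF   # unsigned view: 4294965296 == 2**32 - 2000

# Scaled down by 100, as the magnitude of the reported figure suggests:
avgqu_sz = underflowed / 100.0
print(avgqu_sz)  # 42949652.96 -- the exact value in the output above
```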

Additional info:

iostat executed on an IDE-based system shows:
--------------------------------------------


avg-cpu:  %user   %nice    %sys   %idle
           0.50    0.00    0.25   99.25

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00  17.00  0.00  5.00    0.00  176.00     0.00    88.00
   35.20 42949652.96    0.00 200.00 100.00
/dev/hda1    0.00   6.00  0.00  2.00    0.00   64.00     0.00    32.00
   32.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00  11.00  0.00  3.00    0.00  112.00     0.00    56.00
   37.33     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           0.25    0.00    0.25   99.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

iostat executed on a SCSI-based system shows:
----------------------------------------------

avg-cpu:  %user   %nice    %sys   %idle
           0.00    0.00    0.00  100.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           0.00    0.00    0.00  100.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
Comment 1 Jim Laverty 2004-01-13 13:48:23 EST
Correction for the SCSI stats (which had the IDE results posted in them):

Linux 2.4.20-20.9smp (stout)   01/13/2004

avg-cpu:  %user   %nice    %sys   %idle
           1.72    0.00    0.72   97.56

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/sda     0.09  57.39  0.04 12.54    0.81  559.53     0.40   279.76
   44.55     0.52    4.14   0.39   0.49
/dev/sda1    0.01   0.87  0.01  0.29    0.13    9.33     0.07     4.66
   31.48     0.03   10.82  10.75   0.32
/dev/sda2    0.02   0.21  0.03  0.12    0.33    2.67     0.17     1.34
   20.22     0.03   21.69  21.42   0.32
/dev/sda3    0.07   0.32  0.01  0.39    0.31    5.71     0.15     2.85
   15.20     0.04    9.53   9.19   0.36
/dev/sda5    0.00   0.01  0.00  0.00    0.00    0.06     0.00     0.03
   66.66     0.00    1.31   0.90   0.00
/dev/sda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
   14.49     0.00  147.03 147.03   0.00
/dev/sda7    0.00  55.98  0.00 11.73    0.03  541.76     0.02   270.88
   46.18     0.42    3.57   0.33   0.38

avg-cpu:  %user   %nice    %sys   %idle
           5.50    0.00    2.25   92.25

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/sda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
Comment 2 Nils Philippsen 2004-01-14 07:35:07 EST
Is there only one bogus value in the sar output, and only for the first
sar run after (re)booting?
Comment 3 Jim Laverty 2004-01-29 10:34:07 EST
The bogus output only shows up in 'iostat'.  It does not show up
in 'sar' with this newer kernel.  Prior to 2.4.20 it showed up in
sar as well.

[root@dontbuyscostock root]# iostat -x 1  100
Linux 2.4.20-20.9smp (dontbuyscostock)       01/29/2004

avg-cpu:  %user   %nice    %sys   %idle
          10.54    6.50    0.61   82.36

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     3.10  75.87  0.81 38.22   31.22  912.96    15.61   456.48
   24.19     0.03    0.19   0.81   3.15
/dev/hda1    0.54   0.28  0.08  0.16    4.93    3.54     2.47     1.77
   34.76     0.01    5.88   1.84   0.04
/dev/hda2    1.80   0.43  0.45  0.16   17.99    4.71     8.99     2.35
   37.60     0.09   15.42   1.94   0.12
/dev/hda3    0.74  74.85  0.16 37.63    7.28  900.02     3.64   450.01
   24.01     0.27    0.71   0.54   2.02
/dev/hda5    0.01   0.24  0.00  0.02    0.13    2.08     0.06     1.04
   94.06     0.08  351.01   4.18   0.01
/dev/hda6    0.00   0.07  0.11  0.25    0.90    2.61     0.45     1.31
    9.63     0.28   76.92  30.07   1.10
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
   12.47     0.00  267.50  18.23   0.00

avg-cpu:  %user   %nice    %sys   %idle
           1.25    0.00    0.25   98.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           1.75    0.00    0.25   98.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           1.25    0.00    0.00   98.75

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
Comment 4 Jim Laverty 2004-01-29 11:17:33 EST
The 2.4.20-28.9smp kernel produces the same results, using the
sysstat-4.0.7-3 rpm.
Comment 5 Philip Pokorny 2004-02-16 20:05:13 EST
If you check, you will probably find negative values in /proc/partitions.

Zlatko Calusic at http://linux.inet.hr/ reports that a diskstats patch
from Rick Lindsley (http://linux.inet.hr/diskstats-2.4.patch) fixes this.
I'll copy that small patch here.
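As a quick way to confirm, one could scan /proc/partitions for negative statistics fields. A hypothetical sketch (the column layout below mimics the 2.4-era extended statistics, where the per-disk counters follow the fourth column, the device name; the sample data is made up):

```python
# Hypothetical check for the negative per-disk statistics described above.
# Assumes the 2.4-style /proc/partitions layout: major, minor, #blocks,
# name, then the extended I/O counters.
def negative_stat_lines(text):
    bad = []
    for line in text.splitlines():
        fields = line.split()
        # Skip the header and blank lines; data rows start with a number.
        if len(fields) > 4 and fields[0].isdigit():
            # Any counter after the device name that parses negative.
            if any(f.lstrip('-').isdigit() and int(f) < 0 for f in fields[4:]):
                bad.append(fields[3])
    return bad

sample = """major minor  #blocks  name rio rmerge rsect ruse wio wmerge wsect wuse running use aveq
   3     0   40000000 hda 120 10 2600 300 45 17 880 150 -2000 500 900
   3     1   20000000 hda1 60 5 1300 150 20 6 400 70 0 200 400
"""
print(negative_stat_lines(sample))  # ['hda']
```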
Comment 6 Philip Pokorny 2004-02-16 20:06:34 EST
Created attachment 97719 [details]
Patch to fix negative stats in /proc/partitions

Here is the diskstats patch from http://linux.inet.hr/diskstats-2.4.patch
Comment 7 Jeremy McNicoll 2004-02-23 12:33:23 EST
I have created a patch which includes Rick's work and adds a fix
for the incorrect data reported by iostat.  This fix addresses the
problem of incorrect values reported under high amounts of merges.
It now adheres to Little's Law
(http://www.mcnicoll.ca/iostat/theory.html).

The patch is here:
(http://www.mcnicoll.ca/iostat/patch_diskstats_24_23).  There is a
series of tests and changes I did in order to confirm the validity of
the numbers (http://www.mcnicoll.ca/iostat/results.html).
Everything seems correct after a large amount of rigorous testing.
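For reference, Little's Law says the average number of requests in the system equals the arrival rate times the average time each request spends there: L = lambda * W. A small illustration with made-up numbers (mapping avgqu-sz to L, r/s + w/s to lambda, and await to W is my reading of how it applies to iostat, not something stated in this bug):

```python
# Little's Law sanity check for iostat figures: avgqu-sz (L) should
# equal the request arrival rate (lambda) times the average time each
# request spends in the system (W, i.e. await). Numbers are illustrative.
def expected_avgqu_sz(reads_per_s, writes_per_s, await_ms):
    arrival_rate = reads_per_s + writes_per_s   # lambda, requests/second
    time_in_system = await_ms / 1000.0          # W, seconds per request
    return arrival_rate * time_in_system        # L = lambda * W

# 50 requests/s each spending 4 ms in the queue+service path:
print(expected_avgqu_sz(10.0, 40.0, 4.0))  # 0.2
```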

Comment 8 Jeremy McNicoll 2004-02-23 12:38:04 EST
Created attachment 97954 [details]
patch to solve incorrect data reported from iostat under heavy merging
Comment 9 Nils Philippsen 2004-02-23 12:40:45 EST
Apparently this isn't a sysstat bug then -> transferring to kernel
component and reassigning.
Comment 10 Bugzilla owner 2004-09-30 11:41:46 EDT
Thanks for the bug report. However, Red Hat no longer maintains this version of
the product. Please upgrade to the latest version and open a new bug if the problem
persists.

The Fedora Legacy project (http://fedoralegacy.org/) maintains some older releases, 
and if you believe this bug is interesting to them, please report the problem in
the bug tracker at: http://bugzilla.fedora.us/
