Bug 19153 - "iostat" seems broken
"iostat" seems broken
Product: Red Hat Linux
Classification: Retired
Component: sysstat
Hardware: i386 Linux
Priority: medium  Severity: medium
Assigned To: Preston Brown
QA Contact: David Lawrence
Reported: 2000-10-15 18:29 EDT by Chris Evans
Modified: 2007-04-18 12:29 EDT
CC: 3 users

Doc Type: Bug Fix
Last Closed: 2001-02-15 19:10:32 EST

Attachments: None
Description Chris Evans 2000-10-15 18:29:49 EDT
(Using the latest RH7.0 update sysstat RPM)

iostat seems broken.
The command "iostat 1" yields some very strange results.

1) The %iowait field seems inverted. That is to say, when my disk is
totally idle, this registers at 100%. When largely idle (playing an mp3),
it registers at about 98%. I would expect figures of 0% and 2%
respectively, to indicate the system is not heavily waiting on I/O!!
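For reference, the expected computation treats %iowait as the I/O-wait fraction of total CPU time, not the idle fraction. A minimal sketch of that arithmetic (illustrative names only, not the actual sysstat code):

```python
def iowait_percent(iowait_ticks, total_ticks):
    """Percentage of the interval the CPU spent waiting on outstanding I/O."""
    return 100.0 * iowait_ticks / total_ticks if total_ticks else 0.0

# An idle system accumulates idle ticks, not iowait ticks:
print(iowait_percent(iowait_ticks=0, total_ticks=100))  # -> 0.0, not 100.0
print(iowait_percent(iowait_ticks=2, total_ticks=100))  # -> 2.0 for a mostly idle box
```

The inverted behaviour described above looks as if the idle fraction is being reported under the %iowait label.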

2) Here is an output fragment

Disks:         tps    Kb_read/s    Kb_wrtn/s    Kb_read    Kb_wrtn
hdisk0        0.00         0.00         0.00          0          0
hdisk1        0.00         0.00         0.00          0          0
hdisk2        0.00         0.00         0.00          0          0
hdisk3        0.00         0.00         0.00          0          0

(I only have one physical disk)
When I load up the disk, for example with the command "find /", the "tps"
field registers figures of around 40. However, very worryingly, the other
four fields remain at 0.00 or 0.

One more comment - the kernel disk accounting patch exposes "average disk
queue depth"; it would be very nice if the iostat program could report
that too.

cc: to Stephen because this could be a missing or incorrect version of the
userland iostat patch. The kernel patch seems fine, looking at

I'm happy to test things as always.
Comment 1 Chris Evans 2000-10-16 15:12:25 EDT
Hmm, I just found another version of iostat, hidden at

It seems to be totally different?

BUT, it seems to work correctly and offers the following beautiful statistic:
"average request service time".
          hda          hda1          hda2          hda3          hda4          hda5          cpu
k/s t/s serv  k/s t/s serv  k/s t/s serv  k/s t/s serv  k/s t/s serv  k/s t/s serv   us  sy  id
13624 107 36.6    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0    2  21  76
14590 116 37.0    0   0  0.0   16   0  0.0    0   0  0.0    0   0  0.0    0   0  0.0    3  19  78

nifty, eh? That's 14Mb/sec and ~100 requests per second at ~40ms service time
It's generated by "dd" from /dev/hda to /dev/null with 1024Mb blocksize.
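Those figures are self-consistent; a quick back-of-envelope check (plain arithmetic, not sysstat code):

```python
# Values taken from the second sample line for hda above
kb_per_sec = 14590          # k/s column
transfers_per_sec = 116     # t/s column

mb_per_sec = kb_per_sec / 1024.0
avg_request_kb = kb_per_sec / transfers_per_sec

print(f"{mb_per_sec:.1f} MB/s")            # ~14.2 MB/s, matching the ~14Mb/sec claim
print(f"{avg_request_kb:.0f} KB/request")  # ~126 KB average request size
```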

Ideally, I'd like to see an iostat which:
a) Works
b) Offers the above "average service time in ms" statistic
c) Offers the %iowait statistic
d) Ideally would offer the average queue depth statistic
e) Offers the standard "kb/sec" and "req/sec" statistics

Unfortunately, that would seem to require a combination of the two different
iostat programs.
Comment 2 Chris Evans 2000-10-16 20:44:29 EDT
OK!! These are indeed two different "iostat" programs.
Unfortunately, it seems that RH7.0 ships with the wrong one.
Playing with the one I mention above at ftp.uk.linux.org, it _does_ seem to
satisfy all the requirements I list above!

Stephen - can you point me to where the iostat.c file came from? I'd like to fix
a few bugs/uglies, and I'd like to base my work on the most recent version!

I'd suggest that this might warrant an update once the proper iostat.c has
been prettified.
Comment 3 Stephen Tweedie 2000-10-17 10:09:26 EDT
I've already fixed a couple of the iostat/systat versions out there for the
cleaned-up sard output, and I'll do the necessary for this one once I'm back in
the UK next week.
Comment 4 Chris Evans 2000-10-17 11:56:06 EDT
Can I volunteer to review the fixed packages?
Comment 5 Derek Tattersall 2001-01-10 15:47:23 EST
iostat from sysstat-3.3.3-2 in the RHL7.1 beta2 displays no I/O activity for
the following command: dd if=/dev/sda5 of=/dev/null bs=72k. In fact, iostat
freezes and no longer updates the display.
Comment 6 Chris Evans 2001-01-10 16:59:43 EST
The problem is now twofold:
1) The 2.4 kernel hasn't been patched with the enhanced I/O statistics patch
from 2.2 yet
- This needs doing, or you've screwed people relying on the RH7.0 advanced
statistics
2) The default iostat program does not expose the cool enhanced statistics
- The alternative iostat I quote above is better in this regard.
Comment 7 Preston Brown 2001-01-17 12:59:56 EST
the new iostat is not broken anymore, but it doesn't do as much as the iostat
you reference.  However, it is maintained and works well in other ways.

I have forwarded on the iostat.c file you referenced to the maintainer of the
version we are currently shipping so that he may merge the two.
Comment 8 Chris Evans 2001-02-15 19:10:28 EST
I just spotted something very interesting on
comp.os.linux.announce. It's a new iostat version.
In the author's own words:
There are two interesting things coming with sysstat-3.3.5:
1) The iostat command has been greatly improved and now takes full
advantage of Stephen Tweedie's kernel patch to display extended I/O
statistics.
However, also note (also from the author):
Please note that version 3.3.5 is a development release. The latest
stable version is still 3.2.4.
80kB  sysstat-3.3.5.tar.gz

Comment 9 Preston Brown 2001-03-05 16:24:36 EST
we are up to this version in rawhide, and it appears very stable.  I've been
cooperating with the author.
Comment 10 Chris Evans 2001-03-08 15:24:14 EST
Nice one. Wolverine has this version, it seems.
One nitpick: the %util value is scaled incorrectly - it ranges from 0% to 1000%,
i.e. a factor of 10 out.
I'm not re-opening the bug for such a minor point, but it would be nice to
get it fixed.
Comment 11 Stephen Tweedie 2001-03-09 12:53:55 EST
The ticks output in current sard patches is biased to output 1000 ticks per
second: in other words, it is no longer dependent on "HZ".  This means that the
same parser will work correctly for both Intel and for architectures such as
Alpha where HZ=1000.
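Put differently, a parser that still divides those counters by HZ-based jiffies on an HZ=100 box inflates %util tenfold, which matches the 0-1000% range noted above. A standalone sketch under that assumption (made-up values, not the iostat source):

```python
HZ = 100                  # x86 kernel tick rate of the era
elapsed_s = 10.0          # sampling interval in seconds
busy_ticks = 10_000       # sard busy counter: 1000 ticks/s, fully busy for 10 s

# Wrong: treating the counter as HZ-based jiffies
util_wrong = 100.0 * busy_ticks / (elapsed_s * HZ)     # -> 1000.0, i.e. "1000%"

# Right: the counter runs at 1000 ticks/s on every architecture
util_right = 100.0 * busy_ticks / (elapsed_s * 1000)   # -> 100.0, i.e. "100%"

print(util_wrong, util_right)
```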
Comment 12 Need Real Name 2002-04-06 14:19:04 EST
A guy pointed me that avg wait times and service times as displayed
by 'iostat -x' were wrong. On his RedHat system running 'iostat -x 10',
he gets values of about 200-400 ms and higher. It seems to be too high
by an order of magnitude, since the SCSI/Fc controller in his Compaq
and IBM machines are musch faster, there was I/O load contention, etc.

He sent me a very small patch (below) to fix this, but I am unable to
integrate it into sysstat because I lack knowledge of how the kernel and
sct's patch work.
Could you tell me if this patch is acceptable and if I can apply it to sysstat?

--- sysstat-4.0.3-orig/iostat.c Fri Feb 11 14:15:19 2002
+++ sysstat-4.0.3/iostat.c      Thu Feb 14 11:30:05 2002
@@ -372,7 +372,8 @@
                tput   = nr_ios * HZ / itv;
                util   = ((double) current.ticks) / itv;
                svctm  = tput ? util / tput : 0.0;
-               await  = nr_ios ? (current.rd_ticks + current.wr_ticks) / nr_ios * 1000.0 / HZ : 0.0;
+               /* kernel gives ticks already in milliseconds for all platforms
+                  -> no need for further scaling */
+               await  = nr_ios ? (current.rd_ticks + current.wr_ticks) / nr_ios : 0.0;
                arqsz  = nr_ios ? (current.rd_sectors + current.wr_sectors) / nr_ios : 0.0;
                printf("/dev/%-5s", disk_hdr_stats[disk_index].name);
@@ -387,7 +388,8 @@
                       ((double) current.aveq) / itv,
-                      svctm * 1000.0,
+                      /* again: ticks in milliseconds */
+                      svctm * 100.0,
                       /* NB: the ticks output in current sard patches is biased
                          to output 1000 ticks per second */
                       util * 10.0);

The problem concerns every platform with recent Red Hat and sysstat installed.
Thanks a lot for your help.
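The effect of the await hunk can be illustrated in isolation. Assuming the sard counters are already in milliseconds, the old `* 1000.0 / HZ` scaling inflates await tenfold on an HZ=100 kernel (a standalone sketch with made-up counter values, not the actual iostat.c code):

```python
HZ = 100
rd_ticks, wr_ticks = 180, 120  # ms spent on reads/writes in the interval (sard counters)
nr_ios = 10                    # I/Os completed in the interval

await_old = (rd_ticks + wr_ticks) / nr_ios * 1000.0 / HZ  # -> 300.0 ms, 10x too high
await_new = (rd_ticks + wr_ticks) / nr_ios                # -> 30.0 ms, plausible

print(await_old, await_new)
```

This matches the report above: observed values of 200-400 ms where 20-40 ms would be realistic.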
