Bug 557062 - Review Request: iotop - Top like utility for I/O
Summary: Review Request: iotop - Top like utility for I/O
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: Package Review
Version: 5.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 5.8
Assignee: Dan Horák
QA Contact:
URL:
Whiteboard:
Depends On: 516961 546266
Blocks: 188273 545526
 
Reported: 2010-01-20 10:08 UTC by Jiri Olsa
Modified: 2018-11-26 19:43 UTC
CC: 30 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-10 14:00:14 UTC
Target Upstream Version:
Embargoed:



Description Jiri Olsa 2010-01-20 10:08:27 UTC
Making the iotop package available for RHEL 5.6.

Comment 1 Jiri Olsa 2010-01-20 16:16:09 UTC
Spec URL: http://people.redhat.com/jolsa/iotop/iotop.spec
SRPM URL: http://people.redhat.com/jolsa/iotop/iotop-0.4-1.src.rpm

Description:
Linux has always been able to show how much I/O is going on overall
(the bi and bo columns of the vmstat 1 command).
iotop is a Python program with a top-like UI that shows
on behalf of which process the I/O is going on.
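
As a rough illustration of the counters involved (this is not iotop's implementation: iotop queries the kernel's taskstats netlink interface, as the traceback in comment 4 shows), kernels built with per-task I/O accounting also expose the same read/write byte counters in /proc/<pid>/io, and they can be sampled with a few lines of Python. Treat the sketch below as an assumption-laden example; the stock RHEL 5 kernel of this era does not necessarily provide that proc file.

#!/usr/bin/env python
# Minimal sketch, not iotop's actual code path: read the per-process I/O
# counters from /proc/<pid>/io (requires kernel I/O accounting; run as root
# to see other users' processes).
import os

def read_io_counters(pid):
    """Return the /proc/<pid>/io counters as a dict, e.g. read_bytes."""
    counters = {}
    f = open('/proc/%d/io' % pid)
    try:
        for line in f:
            key, value = line.split(':')
            counters[key.strip()] = int(value)
    finally:
        f.close()
    return counters

if __name__ == '__main__':
    for name in os.listdir('/proc'):
        if not name.isdigit():
            continue
        try:
            io = read_io_counters(int(name))
        except (IOError, OSError):
            continue  # process exited or permission denied
        print('%6s  read_bytes=%-12d write_bytes=%d'
              % (name, io.get('read_bytes', 0), io.get('write_bytes', 0)))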

Comment 2 Dan Horák 2010-02-19 13:55:54 UTC
The formal review follows; see the notes after the checklist explaining the items marked BAD:

OK      source files match upstream:
            cb4fc9f5e1f2156312ad2bfdb9734682c7bc6af1  iotop-0.4.tar.bz2
OK      package meets naming and versioning guidelines.
OK      specfile is properly named, is cleanly written and uses macros consistently.
OK      dist tag is present.
OK      build root is correct.
OK      license field matches the actual license.
OK      license is open source-compatible (GPLv2). XXX License text included in package.
OK      latest version is being packaged.
OK      BuildRequires are proper.
N/A     compiler flags are appropriate.
OK      %clean is present.
OK      package builds in mock (Rawhide/x86_64).
N/A     debuginfo package looks complete.
BAD     rpmlint is silent.
BAD     final provides and requires look sane.
N/A     %check is present and all tests pass.
OK      no shared libraries are added to the regular linker search paths.
OK      owns the directories it creates.
OK      doesn't own any directories it shouldn't.
OK      no duplicates in %files.
OK      file permissions are appropriate.
OK      no scriptlets present.
OK      code, not content.
OK      documentation is small, so no -docs subpackage is necessary.
OK      %docs are not necessary for the proper functioning of the package.
OK      no headers.
OK      no pkgconfig files.
OK      no libtool .la droppings.
OK      not a GUI app.


- you should drop python from Requires, because it's brought in automagically by "R: python(abi) = 2.4" in the binary rpm
- rpmlint complains a bit
iotop.noarch: W: incoherent-version-in-changelog 0.4.1 ['0.4-1.el5', '0.4-1']
    => s/0.4.1/0.4-1/

The package is APPROVED, but fix these 2 issues during the import into CVS.

Comment 3 Jiri Olsa 2010-02-25 17:06:05 UTC
Hi,

I published the iotop-related changes/RPMs at

http://people.redhat.com/jolsa/iotop/

so they are available for use/testing before 5.6 is ready.

jirka

Comment 4 dijuremo 2010-06-18 17:45:26 UTC
Downloaded, compiled, and installed it, and when I try to run it I get:

[root@phys41012 tmp]# iotop
Traceback (most recent call last):
  File "/usr/bin/iotop", line 16, in ?
    main()
  File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 547, in main
    main_loop()
  File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 537, in <lambda>
    main_loop = lambda: run_iotop(options)
  File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 452, in run_iotop
    return curses.wrapper(run_iotop_window, options)
  File "/usr/lib64/python2.4/curses/wrapper.py", line 44, in wrapper
    return func(stdscr, *args, **kwds)
  File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 444, in run_iotop_window
    process_list = ProcessList(taskstats_connection, options)
  File "/usr/lib/python2.4/site-packages/iotop/data.py", line 339, in __init__
    self.update_process_counts()
  File "/usr/lib/python2.4/site-packages/iotop/data.py", line 395, in update_process_counts
    stats = self.taskstats_connection.get_single_task_stats(thread)
  File "/usr/lib/python2.4/site-packages/iotop/data.py", line 126, in get_single_task_stats
    reply = self.connection.recv()
  File "/usr/lib/python2.4/site-packages/iotop/netlink.py", line 229, in recv
    raise err
OSError: Netlink error: Invalid argument (22)

[root@phys41012 tmp]# rpm -qa | grep "^python-2"
python-2.4.3-27.el5
[root@phys41012 tmp]# uname -a
Linux phys41012.physics.gatech.edu 2.6.18-194.3.1.el5 #1 SMP Sun May 2 04:17:42 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@phys41012 tmp]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
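
A hedged aside for anyone hitting the same traceback: the netlink "Invalid argument" error is the usual symptom of running iotop on a kernel without the per-task accounting it queries (on RHEL 5 that support comes from a patched kernel, per comment 5 below). One quick check is to look for the relevant options in the kernel build configuration. The option names in the sketch below are taken from the requirements documented by later iotop releases and are an assumption here, not something stated in this bug.

#!/usr/bin/env python
# Hedged sketch: report whether the running kernel's build configuration
# enables the task-accounting options iotop depends on.  The option list is
# an assumption (taken from later iotop releases' documented requirements).
import gzip
import os

REQUIRED = ('CONFIG_TASKSTATS',
            'CONFIG_TASK_DELAY_ACCT',
            'CONFIG_TASK_IO_ACCOUNTING')

def kernel_config_text():
    """Return the running kernel's build configuration as text."""
    path = '/boot/config-%s' % os.uname()[2]
    if os.path.exists(path):
        return open(path).read()
    if os.path.exists('/proc/config.gz'):
        return gzip.open('/proc/config.gz').read().decode('ascii', 'replace')
    raise RuntimeError('kernel configuration not found')

if __name__ == '__main__':
    config = kernel_config_text()
    for option in REQUIRED:
        present = ('%s=y' % option) in config
        print('%-28s %s' % (option, present and 'enabled' or 'MISSING'))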

Comment 5 Gianluca Cecchi 2010-06-28 15:44:18 UTC
Hello,
you have to run it against the patched kernel....

I have a RHEL 5.5 + latest updates x86_64 system. I installed the patched
kernel and built iotop, while keeping the python-ctypes rpm already provided
in the standard RHEL 5 packages.

[root@orasvi2 ~]# uname -r
2.6.18-164.11.1.el5iotop

[root@orasvi2 ~]# rpm -q python-ctypes
python-ctypes-1.0.2-1.1.el5

[root@orasvi2 ~]# rpm -q iotop
iotop-0.4-1

[root@orasvi2 bin]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 5.5 (Tikanga)

Without I/O load I have

Total DISK READ: 1552.14 K/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                  
 5870 be/4 root      943.07 K/s    0.00 B/s  0.00 %  5.12 % ./PV2XXA00 -i -x
 4583 be/4 root      341.86 K/s  110.02 K/s  0.00 %  2.53 % perl -w /usr/sbin/collectl -D
   17 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.29 % [migration/5]
    1 be/4 root        0.00 B/s    0.00 B/s  5.89 %  0.10 % init [3]
 4218 be/4 haldaemo    0.00 B/s    0.00 B/s  0.00 %  0.00 % hald-addon-keyboard: listening on /dev/input/event2
 3207 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % irqbalance
   11 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/3]
 5871 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % grep -i -v ^bus
 4603 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % mingetty tty1
 4605 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % mingetty tty3
   14 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/4]
 4363 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % dsm_sa_datamgrd
 4508 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % dsm_sa_snmpd
 4604 be/4 root        0.00 B/s    0.00 B/s  2.53 %  0.00 % mingetty tty2
   12 be/7 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/3]
 2201 be/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kmpathd/2]
 3554 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % gpm -m /dev/input/mice -t exps2
 5114 be/4 haldaemo    0.00 B/s    0.00 B/s  0.00 %  0.00 % hald-addon-keyboard: listening on /dev/input/event3
 4606 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % mingetty tty4
    7 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/1]
   10 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/2]
   18 be/7 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/5]
   28 be/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [events/2]
   29 be/3 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [events/3]


Now I run the disktest benchmark on an iSCSI volume with multipath enabled:
./disktest -B 96k -h 1 -I BD -K 4 -p l -P T -T 300 -r /dev/mapper/vol3

This starts 4 threads of sequential reads for 5 minutes with 96k blocks.

iotop -o -d 3 gives:
Total DISK READ: 367.42 M/s | Total DISK WRITE: 71.45 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                  
 6996 be/4 root       91.74 M/s    0.00 B/s 97.12 % 97.47 % ./disktest -B 96k -h 1 -I BD ~P T -T 300 -r /dev/mapper/vol3
 6995 be/4 root       91.49 M/s    0.00 B/s 97.25 % 97.40 % ./disktest -B 96k -h 1 -I BD ~P T -T 300 -r /dev/mapper/vol3
 6993 be/4 root       91.46 M/s    0.00 B/s  0.00 % 97.25 % ./disktest -B 96k -h 1 -I BD ~P T -T 300 -r /dev/mapper/vol3
 6994 be/4 root       92.73 M/s    0.00 B/s  0.00 % 97.12 % ./disktest -B 96k -h 1 -I BD ~P T -T 300 -r /dev/mapper/vol3
 6585 be/3 root        0.00 B/s    0.00 B/s  0.00 %  1.12 % [scsi_eh_10]
    8 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.86 % [migration/2]
 6577 be/3 root        0.00 B/s    0.00 B/s  0.00 %  0.86 % [scsi_eh_9]
  685 be/3 root        0.00 B/s   47.64 K/s  0.00 %  0.43 % [kjournald]
   14 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.36 % [migration/4]
    5 rt/3 root        0.00 B/s    0.00 B/s -0.00 %  0.36 % [migration/1]
   11 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.26 % [migration/3]
   23 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.13 % [migration/7]
   20 rt/3 root        0.00 B/s    0.00 B/s  0.00 %  0.10 % [migration/6]
 4214 be/4 haldaemo    0.00 B/s    0.00 B/s  0.00 %  0.03 % hald-addon-keyboard: listening on /dev/input/event3

And the disktest output shows matching results:
| 2010/06/28-17:29:30 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 373948416.0B/s (356.62MB/s), IOPS 3804.0/s.
| 2010/06/28-17:29:31 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 390856704.0B/s (372.75MB/s), IOPS 3976.0/s.
| 2010/06/28-17:29:32 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 380534784.0B/s (362.91MB/s), IOPS 3871.0/s.
| 2010/06/28-17:29:33 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 391643136.0B/s (373.50MB/s), IOPS 3984.0/s.
| 2010/06/28-17:29:34 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 378470400.0B/s (360.94MB/s), IOPS 3850.0/s.
| 2010/06/28-17:29:35 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 378077184.0B/s (360.56MB/s), IOPS 3846.0/s.
| 2010/06/28-17:29:36 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 396263424.0B/s (377.91MB/s), IOPS 4031.0/s.
| 2010/06/28-17:29:37 | STAT  | 6989 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 374046720.0B/s (356.72MB/s), IOPS 3805.0/s.

Now another one:
./disktest -B 96k  -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3

This runs completely random read/write operations with a 75/25 split between reads and writes, across a total of 8 threads with a 96k block size.

iotop gives:
Total DISK READ: 75.34 M/s | Total DISK WRITE: 25.15 M/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                                           
 7512 be/4 root        9.39 M/s    3.38 M/s 99.70 % 99.85 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7513 be/4 root        9.46 M/s    2.70 M/s 99.42 % 99.77 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7515 be/4 root        9.46 M/s    3.01 M/s 99.77 % 99.75 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7510 be/4 root        9.52 M/s    3.50 M/s  0.00 % 99.70 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7514 be/4 root        9.18 M/s    3.22 M/s 99.85 % 99.57 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7511 be/4 root        9.61 M/s    2.76 M/s 99.41 % 99.42 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7516 be/4 root        9.27 M/s    3.07 M/s 99.57 % 99.41 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 7509 be/4 root        9.46 M/s    3.44 M/s  0.00 % 99.41 % ./disktest -B 96k -h 1 -I BD -K 8 -PT -pR -r -w -D75:25 -T 300 /dev/mapper/vol3
 2277 be/3 root        0.00 B/s    0.00 B/s  0.00 %  1.10 % [kjournald]

disktest output gives:
| 2010/06/28-17:38:51 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 78938112.0B/s (75.28MB/s), IOPS 803.0/s.
| 2010/06/28-17:38:51 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26247168.0B/s (25.03MB/s), IOPS 267.0/s.
| 2010/06/28-17:38:52 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 80412672.0B/s (76.69MB/s), IOPS 818.0/s.
| 2010/06/28-17:38:52 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26836992.0B/s (25.59MB/s), IOPS 273.0/s.
| 2010/06/28-17:38:53 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 79429632.0B/s (75.75MB/s), IOPS 808.0/s.
| 2010/06/28-17:38:53 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26443776.0B/s (25.22MB/s), IOPS 269.0/s.
| 2010/06/28-17:38:54 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 80117760.0B/s (76.41MB/s), IOPS 815.0/s.
| 2010/06/28-17:38:54 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26738688.0B/s (25.50MB/s), IOPS 272.0/s.
| 2010/06/28-17:38:55 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 79036416.0B/s (75.38MB/s), IOPS 804.0/s.
| 2010/06/28-17:38:55 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26345472.0B/s (25.12MB/s), IOPS 268.0/s.
| 2010/06/28-17:38:56 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 78839808.0B/s (75.19MB/s), IOPS 802.0/s.
| 2010/06/28-17:38:56 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26247168.0B/s (25.03MB/s), IOPS 267.0/s.
| 2010/06/28-17:38:57 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 79233024.0B/s (75.56MB/s), IOPS 806.0/s.
| 2010/06/28-17:38:57 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26345472.0B/s (25.12MB/s), IOPS 268.0/s.
| 2010/06/28-17:38:58 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 78741504.0B/s (75.09MB/s), IOPS 801.0/s.
| 2010/06/28-17:38:58 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26345472.0B/s (25.12MB/s), IOPS 268.0/s.
| 2010/06/28-17:38:59 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 80117760.0B/s (76.41MB/s), IOPS 815.0/s.
| 2010/06/28-17:38:59 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26738688.0B/s (25.50MB/s), IOPS 272.0/s.
| 2010/06/28-17:39:00 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 80117760.0B/s (76.41MB/s), IOPS 815.0/s.
| 2010/06/28-17:39:00 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26640384.0B/s (25.41MB/s), IOPS 271.0/s.
| 2010/06/28-17:39:01 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat read throughput: 77955072.0B/s (74.34MB/s), IOPS 793.0/s.
| 2010/06/28-17:39:01 | STAT  | 7505 | v1.4.2 | /dev/mapper/vol3 | Heartbeat write throughput: 26050560.0B/s (24.84MB/s), IOPS 265.0/s.

So the results seem coherent: iotop's per-thread rates sum to roughly its Total DISK READ/WRITE figures, which in turn match disktest's heartbeat throughput (a throwaway cross-check script is sketched below).
What about a patch for the current kernel, that is kernel-2.6.18-194.3.1.el5?
And can we push for inclusion in the standard packages for 5.6, or earlier if possible?
Thanks,
Gianluca
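
A throwaway cross-check of the numbers above, purely illustrative: the heartbeat line format is assumed from the output pasted in this comment and the log file name is hypothetical. It averages disktest's heartbeat throughput so it can be compared with iotop's Total DISK READ/WRITE lines and with the sum of the per-thread rates.

#!/usr/bin/env python
# Throwaway sketch for cross-checking iotop against disktest.  It assumes
# heartbeat lines shaped like the ones pasted above, e.g.
# "| ... | Heartbeat read throughput: 78938112.0B/s (75.28MB/s), IOPS 803.0/s."
import re
import sys

HEARTBEAT = re.compile(r'Heartbeat (read|write) throughput: ([0-9.]+)B/s')

def average_throughput(lines):
    """Return the average read/write throughput in MB/s over all heartbeats."""
    samples = {'read': [], 'write': []}
    for line in lines:
        match = HEARTBEAT.search(line)
        if match:
            samples[match.group(1)].append(float(match.group(2)))
    return dict((kind, sum(values) / len(values) / (1024 * 1024))
                for kind, values in samples.items() if values)

if __name__ == '__main__':
    log = open(sys.argv[1])              # e.g. disktest.log (hypothetical name)
    print(average_throughput(log))       # compare with iotop's Total lines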

Comment 8 Larry Troan 2010-08-03 21:55:29 UTC
Missed 5.6. Do we push to 5.7 or CLOSE=WONTFIX for RHEL5?

Comment 9 Bill McGonigle 2010-08-03 22:08:07 UTC
Just FYI, I've been adding the I/O columns to htop as a workaround on RHEL 5 systems. That handles most of the process-level issues I come across, though the I/O priorities don't show up in htop the way they do in iotop.

Comment 10 Georgi Hristov 2010-08-15 07:40:29 UTC
What happened? Why did it not make it into 5.6? Is something still outstanding? Let's push it for 5.7 in that case. RHEL 5 is here to stay for quite a few more years, and a tool such as iotop comes in very handy on larger I/O-intensive servers.

Comment 11 Larry Troan 2010-09-02 21:51:33 UTC
No ACKs. No bandwidth to complete this for 5.6. Pushing to 5.7 for consideration.
Removed NEEDINFO.

Comment 14 Rob K 2010-09-09 00:39:13 UTC
We run a very large regional mirror, and this would be extremely useful to us. If any further testing is needed, I can easily run up a loaded box.

Comment 15 Larry Troan 2010-09-09 14:13:06 UTC
Is this a DUP of bug 545526?

Comment 17 Larry Troan 2011-07-18 08:46:47 UTC
Setting the Fujitsu tracker for 5.8 since this appears to have missed 5.7.

Comment 21 RHEL Program Management 2012-10-10 11:53:13 UTC
Thank you for submitting this issue for consideration. Red Hat Enterprise Linux 5 has reached the end of the Production 1 Phase of its Life Cycle. Red Hat does not plan to incorporate the suggested capability in a future Red Hat Enterprise Linux 5 minor release. If you would like Red Hat to reconsider this feature request and the requested functionality is not currently in Red Hat Enterprise Linux 6, please re-open the request via the appropriate support channels and provide additional supporting details about the importance of this issue.

Comment 22 Markus Falb 2012-10-10 12:15:49 UTC
I am confused about that last statement.
iotop was added in 5.8.

Comment 23 Ondrej Vasik 2012-10-10 14:00:14 UTC
You are right, this bugzilla should have been closed a long time ago... closing now.

