Bug 109484 - top doesn't display correct percentage totals
Status: CLOSED ERRATA
Product: Fedora
Classification: Fedora
Component: procps
Version: 1
Hardware: i686 Linux
Priority: medium
Severity: medium
Assigned To: Alexander Larsson
QA Contact: Brian Brock
Duplicates: 111216
Reported: 2003-11-08 08:01 EST by Stephen Reindl
Modified: 2007-11-30 17:10 EST (History)
9 users

Doc Type: Bug Fix
Last Closed: 2003-12-09 12:32:02 EST


Attachments
Initialize timing values (592 bytes, patch)
2003-11-09 16:44 EST, Stephen Reindl
Init process timing values before use (601 bytes, patch)
2003-11-10 11:30 EST, Stephen Reindl
Smaller change to fix this bug (582 bytes, patch)
2003-12-08 09:42 EST, Michael K. Johnson
It would help if I attached the right patch (625 bytes, patch)
2003-12-08 09:50 EST, Michael K. Johnson

Description Stephen Reindl 2003-11-08 08:01:53 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.4.1)
Gecko/20031030

Description of problem:
When starting top, the tool displays all processes and the percentage
CPU usage per process, but it does not display the totals correctly: the
user, system, and idle totals are always zero.

Version-Release number of selected component (if applicable):
procps-2.0.17-1

How reproducible:
Always

Steps to Reproduce:
1. start top


Actual Results:  All totals (except for IRQs) are zero

Expected Results:  The CPU totals for all values (user, system, idle)
must sum to 100% (for a single CPU)

Additional info:

see the output of the top command here:

 
 14:07:04  up  2:14,  4 users,  load average: 1.39, 2.11, 1.68
77 processes: 76 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
Mem:   385032k av,  374420k used,   10612k free,       0k shrd,   26920k buff
        85012k active,             255308k inactive
Swap:  393112k av,   41348k used,  351764k free                  194412k cached
Comment 1 Stephen Reindl 2003-11-09 16:44:41 EST
Created attachment 95857 [details]
Initialize timing values

This patch initializes the values to be read before the sscanf calls run.
With old kernels (in my case 2.4.22-1.2115.nptl from FC 1), this resolves
the problem.
Comment 2 Stephen Reindl 2003-11-10 11:30:35 EST
Created attachment 95878 [details]
Init process timing values before use

This solution initializes the values only once.

What might still be missing is suppressing the display of the unused
values entirely on older kernels.
Comment 4 Allen Kistler 2003-11-30 02:06:51 EST
I've just noticed this bug myself.
I also notice that top displays correct total numbers if run as root.
Comment 5 Jens Andersen 2003-11-30 06:36:11 EST
*** Bug 111216 has been marked as a duplicate of this bug. ***
Comment 6 Ben Stringer 2003-12-03 09:01:55 EST
> I also notice that top displays correct total numbers if run as root.

I don't see this - the bug is the same when top is run as root.
Comment 7 dale sykora 2003-12-05 19:00:43 EST
I have the same problem on an 8-way ProLiant box.
Output of top with 8 seti clients running below.

 17:42:31  up  4:18,  2 users,  load average: 7.97, 5.60, 2.60
60 processes: 51 sleeping, 9 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    0.0%    0.0%   0.8%     0.0%    1.6%    0.0%
           cpu00    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu01    0.0%    0.0%    0.0%   3.0%     1.4%    1.9%    0.0%
           cpu02    0.0%    0.0%    0.0%   0.2%     0.2%    0.4%    0.0%
           cpu03    0.0%    0.0%    0.0%   1.1%     0.9%    1.0%    0.0%
           cpu04    0.0%    0.0%    0.0%   0.6%     2.5%    0.0%    0.0%
           cpu05    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu06    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%    0.0%
           cpu07    0.0%    0.0%    0.0%   0.0%     0.1%    0.1%    0.0%
Mem:  2068492k av,  240700k used, 1827792k free,       0k shrd,   11352k buff
        67516k active,             149476k inactive
Swap: 1052632k av,       0k used, 1052632k free                   89168k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 1368 root      26   1 14508  14M   672 R N  99.9  0.7   6:03   6 sah
 1369 root      26   1 14612  14M   816 R N  99.9  0.7   6:02   7 sah
 1372 root      26   1 14612  14M   816 R N  99.9  0.7   5:51   1 sah
 1373 root      26   1 14612  14M   816 R N  99.9  0.7   5:45   2 sah
 1374 root      26   1 14612  14M   816 R N  99.9  0.7   5:43   3 sah
 1375 root      26   1 14612  14M   816 R N  99.9  0.7   5:36   4 sah
 1371 root      26   1 14612  14M   816 R N  99.7  0.7   5:59   0 sah
 1370 root      26   1 14560  14M   816 R N  99.6  0.7   6:01   5 sah
 1366 root      16   0  1072 1072   888 R     0.3  0.0   0:01   5 top
Comment 8 Michael K. Johnson 2003-12-08 09:42:28 EST
Created attachment 96397 [details]
Smaller change to fix this bug

Hmm, it seems that initializing the memory when allocating, instead of
allocating and then clearing, would be simpler. Here's a patch to the
patch which introduced this code.
Comment 9 Michael K. Johnson 2003-12-08 09:50:36 EST
Created attachment 96398 [details]
It would help if I attached the right patch

Same fix, correct version of patch patched...
Comment 10 Alexander Larsson 2003-12-09 04:58:34 EST
Does this fixed package work?

http://people.redhat.com/alexl/RPMS/procps-2.0.17-5.i386.rpm

If it does, I'll push it out as an update.
Comment 11 Andreas Müller 2003-12-09 05:58:17 EST
Yes, at least for me it works.

Thanks!
Comment 12 Paul Nasrat 2003-12-09 10:04:32 EST
Tested here on FC1 with update and it does correct it here too.

Please push to updates/testing
Comment 13 Paul Nasrat 2003-12-09 10:19:10 EST
Out of curiosity, could you dump an SRPM in your p.r.c repo? I'd like to
test a rebuild with SELinux turned on.
Comment 14 dale sykora 2003-12-09 10:30:22 EST
The new RPM improves things, but now idle% has too many digits (see below).


CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%  397.2%    2.8%   0.0%     0.0%    0.0%  3681984845051806.4%
           cpu00    0.0%   99.0%    0.9%   0.0%     0.0%    0.0%    0.0%
           cpu01    0.0%   99.2%    0.7%   0.0%     0.0%    0.0%    0.0%
           cpu02    0.0%   99.0%    0.9%   0.0%     0.0%    0.0%    0.0%
           cpu03    0.0%  100.0%    0.1%   0.0%     0.0%    0.0%  3681984845051806.5%
Mem:  1032080k av,  475124k used,  556956k free,       0k shrd,   74220k buff
       100144k active,             260548k inactive
Swap: 1052632k av,       0k used, 1052632k free                  223588k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 1194 root      26   1 14916  14M   820 R N  99.9  1.4 990:40   2 sah
 1195 root      26   1 15984  15M   824 R N  99.9  1.5 990:44   0 sah
 1196 root      26   1 14928  14M   824 R N  99.9  1.4 990:45   3 sah
 1193 root      26   1 14908  14M   824 R N  99.7  1.4 990:44   1 sah
 4192 root      16   0  1076 1076   892 R     0.1  0.1   0:00   1 top
    1 root      16   0   364  364   312 S     0.0  0.0   0:07   0 init
    2 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 swapper
    3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 swapper
    4 root      RT   0     0    0     0 SW    0.0  0.0   0:00   2 swapper
    5 root      RT   0     0    0     0 SW    0.0  0.0   0:00   3 swapper
    6 root      15   0     0    0     0 SW    0.0  0.0   0:00   0 keventd
    7 root      34  19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd/0
    8 root      34  19     0    0     0 SWN   0.0  0.0   0:00   1 ksoftirqd/1
    9 root      34  19     0    0     0 SWN   0.0  0.0   0:00   2 ksoftirqd/2
   10 root      34  19     0    0     0 SWN   0.0  0.0   0:00   3 ksoftirqd/3
   12 root      25   0     0    0     0 SW    0.0  0.0   0:00   0 bdflush
   11 root      25   0     0    0     0 SW    0.0  0.0   0:00   0 kswapd
Comment 15 Panu Matilainen 2003-12-09 11:05:46 EST
Looks ok here with the new version:

 18:07:41  up 3 days,  2:34,  7 users,  load average: 0.73, 0.30, 0.36
96 processes: 92 sleeping, 3 running, 1 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total   73.9%    0.0%   26.0%   0.0%     0.0%    0.0%    0.0%
Comment 16 Alexander Larsson 2003-12-09 12:32:02 EST
Update pushed.

I guess the idle thing is some other bug... :( 
Please file a bug for that.
Comment 17 Pete Zaitcev 2003-12-09 13:12:25 EST
 10:12:48  up 36 min,  3 users,  load average: 0.12, 0.24, 0.12
65 processes: 63 sleeping, 2 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total   10.9%    0.0%    2.5%   0.0%     0.0%    0.0%   86.5%

Idle is calculated correctly for me. I observe that IRQ is still always
zero, but that information may not be available; it probably gets
accounted to a random process.

Dale needs to open another bug.

Comment 18 Phil Anderson 2003-12-09 18:25:13 EST
Applied the latest errata... a lot better, but it still has problems
with multiple CPUs. The idle % for the total needs to be divided by
the number of CPUs! It's a bug I can live with, though :)

CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    0.0%    1.8%   0.0%     0.0%    0.0%  198.0%
           cpu00    0.0%    0.0%    1.9%   0.0%     0.0%    0.0%   98.0%
           cpu01    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%  100.0%


Comment 19 Andreas Müller 2003-12-09 18:32:14 EST
I found that you can change this behaviour by pressing S. It's not a
bug, it's a feature :-)
Comment 20 dale sykora 2003-12-09 19:06:21 EST
I filed a new bug report (111779) for the problem I am experiencing.
Pressing S doesn't change the bug for me.
Comment 21 Andreas Müller 2003-12-09 19:09:47 EST
Doh, I should read the man page... You have to press I, not S...
Comment 22 Andreas Müller 2003-12-09 19:17:40 EST
Dale,

sorry for the noise, but I should also read your bug report (I think I
have to go to bed; it's 1:17 am here in Old Europe...)

Of course pressing "I" doesn't solve the problem with the huge number of
digits in the idle value; it just divides the idle value by the number
of CPUs.
