Bug 111779 - excessive leading digits in top output in idle% column
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: procps
Version: 1
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Daniel Walsh
Reported: 2003-12-09 19:03 EST by dale sykora
Modified: 2007-11-30 17:10 EST (History)

Doc Type: Bug Fix
Last Closed: 2004-03-28 21:28:52 EST


Description dale sykora 2003-12-09 19:03:14 EST
Description of problem:
top causes the idle% column to show many excess leading digits (see the
output below). The original top from Core 1 did not exhibit this
behavior, although the totals were off (see bug 109484).

Version-Release number of selected component (if applicable):
top from procps-2.0.17-5.i386.rpm

How reproducible:
Approximately 1 out of 5 screen refreshes when running seti on all 4
processors. The problem does not appear without the seti load.

Steps to Reproduce:
1. Launch 4 instances of seti, each in a separate directory
2. Run top
3. Watch the idle column for strangeness
  
Actual results:
 17:55:47  up 1 day,  1:17,  1 user,  load average: 3.92, 3.41, 3.63
49 processes: 44 sleeping, 5 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.4%  396.0%    3.6%   0.0%     0.0%    0.0%  3681984845051806.0%
           cpu00    0.0%   98.8%    1.3%   0.0%     0.0%    0.0%  3681984845051806.5%
           cpu01    0.0%   99.0%    1.1%   0.0%     0.0%    0.0%  3681984845051806.5%
           cpu02    0.5%   98.6%    0.7%   0.0%     0.0%    0.0%    0.0%
           cpu03    0.0%   99.6%    0.3%   0.0%     0.0%    0.0%    0.0%
Mem:  1032080k av,  502588k used,  529492k free,       0k shrd,   76660k buff
       115332k active,             259988k inactive
Swap: 1052632k av,       0k used, 1052632k free                  230936k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 5081 root      26   1 15436  15M   676 R N  99.9  1.4   3:43   1 sah
 5082 root      26   1 14404  14M   672 R N  99.9  1.3   3:43   3 sah
 5079 root      26   1 15436  15M   676 R N  99.7  1.4   3:43   0 sah
 5080 root      26   1 15432  15M   672 R N  99.3  1.4   3:42   2 sah


Expected results:
 17:56:11  up 1 day,  1:17,  1 user,  load average: 3.94, 3.45, 3.64
49 processes: 44 sleeping, 5 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%  395.2%    3.6%   0.0%     0.0%    0.0%    0.0%
           cpu00    0.0%   98.6%    1.1%   0.0%     0.0%    0.0%    0.1%
           cpu01    0.0%   99.0%    0.7%   0.0%     0.0%    0.0%    0.1%
           cpu02    0.3%   98.8%    0.7%   0.0%     0.0%    0.0%    0.0%
           cpu03    0.0%   99.0%    0.9%   0.0%     0.0%    0.0%    0.0%
Mem:  1032080k av,  501560k used,  530520k free,       0k shrd,   76660k buff
       115332k active,             258960k inactive
Swap: 1052632k av,       0k used, 1052632k free                  230936k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 5079 root      26   1 14408  14M   676 R N  99.9  1.3   4:07   0 sah
 5081 root      26   1 14408  14M   676 R N  99.9  1.3   4:07   1 sah
 5082 root      26   1 15432  15M   672 R N  99.9  1.4   4:07   3 sah
 5080 root      26   1 15432  15M   672 R N  99.4  1.4   4:06   2 sah


Additional info:
seti version 3.08
The server is an 8-way ProLiant with 4 P3 Xeon/900MHz/2MB-cache CPUs.
I have 2 other 8-way ProLiants with the original Core 1 package and can
provide other info if needed.
It was suggested I file this bug report (separate from bug 109484)
because the fix worked for others.
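
[Not part of the original report.] One common way a percentage this large
can arise is an unsigned-arithmetic wraparound: if the idle-jiffies delta
between two /proc/stat samples is computed with unsigned C types and the
sampled counter ever appears to step backwards, the tiny negative delta
wraps to a value near 2^64 before being divided. The sketch below is a
hypothetical illustration of that mechanism, not the actual procps source:

```python
import ctypes

def idle_percent(old_idle, new_idle, total_delta):
    # Emulate unsigned 64-bit subtraction, as C code using a u_int64_t
    # jiffies counter would compute it: a negative difference wraps.
    delta = ctypes.c_uint64(new_idle - old_idle).value
    return 100.0 * delta / total_delta

# Normal case: idle advanced by 1 jiffy out of a 400-jiffy interval.
print(idle_percent(1000, 1001, 400))   # prints 0.25

# Buggy case: the sampled idle count went "backwards" by 1 jiffy,
# e.g. from an inconsistent /proc/stat snapshot under full load.
print(idle_percent(1001, 1000, 400))   # prints an astronomically large %
```

Under this hypothesis the display is fine whenever the delta stays
non-negative, which would match the bug appearing only intermittently
and only while all CPUs are pegged by the seti load.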
Comment 1 Daniel Walsh 2004-02-11 09:08:18 EST
Could you check this with procps-3.1.15?
Comment 2 dale sykora 2004-02-24 02:22:33 EST
Daniel,
   I installed procps-3.1.15 and no longer see the leading-digit issue.
However, the individual CPUs are no longer listed as before (only a
summary, as shown below). Also, some of the process lines (sah and top)
and metrics (except for the top line) are now bold (brighter). I'm not
sure if this is by design.

top - 18:33:25 up 10:43,  1 user,  load average: 8.00, 8.01, 7.97
Tasks:  55 total,   9 running,  46 sleeping,   0 stopped,   0 zombie
Cpu(s):   0.0% user,   1.0% system,  99.0% nice,   0.0% idle
Mem:   2068272k total,   243500k used,  1824772k free,    33824k buffers
Swap:        0k total,        0k used,        0k free,    60384k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1149 root      26   1 15928  15m  840 R 99.9  0.8 632:12.56 sah
 1156 root      26   1 16336  15m  840 R 99.9  0.8 632:07.50 sah
 1150 root      26   1 14964  14m  840 R 99.9  0.7 632:18.71 sah
 1151 root      26   1 14840  14m  840 R 99.9  0.7 632:20.06 sah
 1152 root      26   1 15952  15m  840 R 99.9  0.8 632:16.63 sah
 1153 root      26   1 15844  15m  836 R 99.9  0.8 632:11.45 sah
 1154 root      26   1 15840  15m  836 R 99.9  0.8 632:16.80 sah
 1155 root      26   1 14812  14m  836 R 99.3  0.7 632:14.77 sah
 1462 root      16   0   940  940  780 R  0.7  0.0   0:02.28 top
    1 root      16   0   428  428  372 S  0.0  0.0   0:07.49 init
    2 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
    3 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
    4 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
    5 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
    7 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
    8 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 swapper
Comment 3 Daniel Walsh 2004-03-28 21:28:52 EST
Yes, this is how the upstream maintainer wants it.
Comment 4 Charles Mitchell 2004-09-16 05:05:42 EDT
The upstream maintainer wants it to not show per-CPU stats? Or do they
want the bolding? I was having the silly idle% bug as well until
getting procps-3 as above, but I'm sorry to lose the per-CPU stats.
Anyway, thanks. You're doing much better than the guy in charge of X
and hardware.
