Bug 137577 - failures of the process accounting in ps, top, and time
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: kernel
Version: 3.0
Hardware: i686 Linux
Priority: medium
Severity: medium
Assigned To: Ernie Petrides
QA Contact: Brian Brock
Reported: 2004-10-29 12:00 EDT by Allen Brown
Modified: 2007-11-30 17:07 EST
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2004-10-29 20:38:54 EDT


Attachments: None
Description Allen Brown 2004-10-29 12:00:29 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6)
Gecko/20040207 Firefox/0.8

Description of problem:
Process timing measurement is inaccurate.  Note also that top, ps, and
/proc do not charge processor time to tasks whose CPU bursts complete
in less than 1/HZ seconds (one "jiffy").

Version-Release number of selected component (if applicable):
kernel-smp-2.4.21-9.EL

How reproducible:
Always

Steps to Reproduce:
1. exec an app (eatcpu)
2. run top (see output #1 below)
3. kill eatcpu
4. run top again (see output #2 below)

Actual Results:  #1:
63 processes: 61 sleeping, 2 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total   25.1%    0.0%    0.0%   0.0%     0.0%    0.0%   74.8%
           cpu00    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%  100.0%
           cpu01   69.0%    0.0%    0.0%   0.0%     0.0%    0.0%   31.0%
           cpu02    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%  100.0%
           cpu03   31.5%    0.0%    0.0%   0.0%     0.0%    0.0%   68.5%
Mem:  3950388k av,  602488k used, 3347900k free,       0k shrd,  162580k buff
       331096k active,              25784k inactive
Swap: 4096312k av,       0k used, 4096312k free                  165716k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
12746 root      25   0   640  640   560 R    24.9  0.0   0:13   3 eatcpu
    1 root      15   0   500  500   440 S     0.0  0.0   1:28   3 init
    2 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0

----------------------------------------------------
#2
62 processes: 61 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%   99.9%
           cpu00    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%  100.0%
           cpu01    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%  100.0%
           cpu02    0.0%    0.0%    0.0%   0.0%     0.0%    0.2%   99.8%
           cpu03    0.0%    0.0%    0.0%   0.0%     0.0%    0.0%  100.0%
Mem:  3950388k av,  602480k used, 3347908k free,       0k shrd,  162580k buff
       331008k active,              25784k inactive
Swap: 4096312k av,       0k used, 4096312k free                  165716k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
    1 root      15   0   500  500   440 S     0.0  0.0   1:28   1 init
    2 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
    3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1



Expected Results:  The per-CPU percentages reported in the CPU states
header should match (or sum to) the %CPU figures reported per PID.

Additional info:
Comment 2 Rik van Riel 2004-10-29 14:20:28 EDT
Reassigning to PM since this is a feature request.
Comment 3 Ernie Petrides 2004-10-29 20:38:54 EDT
Hello, Allen.  It looks to me like the cpu usage %'s are correct,
although obviously the system-wide totals have a factor of 4 figured
in (on a 4-cpu system).  In your 1st set of results, you have:

  (0.0 + 69.0 + 0.0 + 31.5) / 4 = 25.125, shown as 25.1

There is obviously some minor variation from one instant to the next
(from when "top" runs to when "ps" runs).  And the fact that process
timing is done in 1/HZ intervals is by design (in order to have low
overhead).
