Bug 190176 - nice doesn't work on smp hardware
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Assigned To: Ingo Molnar
Brian Brock
Depends On:
Blocks:
Reported: 2006-04-28 10:48 EDT by mickael gastineau
Modified: 2012-06-20 09:18 EDT (History)
2 users (show)

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-06-20 09:18:07 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description mickael gastineau 2006-04-28 10:48:09 EDT
Description of problem:

In the following discussion, all processes are single-threaded.
Processes with a very low priority (e.g. renice 19 pid) continue to use one CPU at about 99% even when normal-priority processes need to run. The problem occurs on a 4-processor IA64 machine.
In the example below, process 18011 (low priority) should use only a few percent of the CPU, but it takes 99%.

Is this a kernel problem or a problem with the nice command?

We need to run low-priority jobs for a long time.
When no normal-priority jobs are running, these jobs should use 100% of each available CPU.
But when normal-priority jobs are running, these jobs should drop to about 5% of a CPU or less if no CPU is free.

Do you have any solution?
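A quick sanity check of the behaviour the reporter expects (a generic sketch, not the reporter's inpop workload): a process started under nice -n 19 should report niceness 19, and the scheduler should then give it CPU time only when higher-priority runnable tasks leave a CPU free.

```shell
# 'nice' with no command prints the current niceness; running it under
# 'nice -n 19' should therefore print 19.
nice -n 19 nice

# Start a background busy loop at niceness 19, confirm the kernel sees
# the nice value, then clean up.
nice -n 19 sh -c 'while :; do :; done' &
pid=$!
ps -o nice= -p "$pid"    # should show 19
kill "$pid"
```

If the nice value shows up correctly here but the low-priority job still monopolizes a CPU under contention, the problem is in the scheduler rather than in the nice/renice utilities.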






Version-Release number of selected component (if applicable):
kernel 2.6.9-5.EL


How reproducible:

Requires SMP hardware.

Steps to Reproduce:
1. Run two CPU-bound jobs with a low priority (renice 19 pid).
2. Run three or more CPU-bound jobs with a normal priority.
3. Observe CPU usage with top.
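The steps above can be sketched as a short script (a hypothetical reproduction: plain shell busy loops stand in for the reporter's inpop/inpopx binaries):

```shell
#!/bin/sh
# Two CPU-bound jobs at the lowest priority (nice 19).
for i in 1 2; do
    nice -n 19 sh -c 'while :; do :; done' &
    low_pids="$low_pids $!"
done

# Three CPU-bound jobs at normal priority (nice 0).
for i in 1 2 3; do
    sh -c 'while :; do :; done' &
    norm_pids="$norm_pids $!"
done

# On a 4-CPU box the nice-19 jobs should be almost starved by the
# normal-priority jobs; in the reported behaviour one of them keeps
# ~99% of a CPU instead.
top -b -n 1 | head -20

kill $low_pids $norm_pids
```

The top snapshot in "Actual results" below shows the reported outcome: PID 18011 at NI 19 holding 99.8% CPU while normal-priority jobs get as little as 42.6%.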
  
Actual results:

top - 15:32:32 up 359 days,  3:40,  7 users,  load average: 5.93, 5.55, 5.74
Tasks: 132 total,   8 running, 123 sleeping,   0 stopped,   1 zombie
Cpu(s): 73.7% us,  0.1% sy, 26.2% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   4072640k total,  3842976k used,   229664k free,    89632k buffers
Swap:  4171744k total,   436144k used,  3735600k free,  3434944k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                             
30934 gastin    25   0  3552 2736  816 R 99.9  0.1   8:42.00 inpopx                                                              
18011 gastin    39  19  5584 4560 1072 R 99.8  0.1  34804:20 inpop                                                               
30996 gastin    25   0  3552 2720  800 R 95.2  0.1   1:23.63 inpopx                                                              
30911 manche    25   0 13792 6256 2208 R 57.0  0.2   9:04.15 inpop                                                               
30913 manche    25   0 13792 6256 2208 R 42.6  0.2   9:08.59 inpop                                                               
25591 gastin    39  19  5584 4560 1072 R  4.8  0.1  26658:37 inpop                                                               

Expected results:


Additional info:
Comment 1 Luming Yu 2007-08-30 22:32:32 EDT
Isn't this a generic scheduler problem that can be observed on other architectures besides ia64?
Comment 2 Jiri Pallich 2012-06-20 09:18:07 EDT
Thank you for submitting this issue for consideration in Red Hat Enterprise Linux. The release you asked us to review is now End of Life.
Please see https://access.redhat.com/support/policy/updates/errata/

If you would like Red Hat to re-consider your feature request for an active release, please re-open the request via appropriate support channels and provide additional supporting details about the importance of this issue.
