Bug 990499 - Abused gear will not be throttled automatically
Summary: Abused gear will not be throttled automatically
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Fotios Lindiakos
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-31 10:09 UTC by Meng Bo
Modified: 2015-05-14 23:25 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-07 22:58:41 UTC
Target Upstream Version:
Embargoed:


Attachments:

Description Meng Bo 2013-07-31 10:09:08 UTC
Description of problem:
Create a gear and run a script in it that burns up CPU, while monitoring the gear via rhc-watchman. The gear should be throttled for the abuse, but it is not.

Version-Release number of selected component (if applicable):
devenv_3591

How reproducible:
always

Steps to Reproduce:
1. Create an app.
2. SSH into the gear and run the following script to generate high CPU load:
for i in `seq 1 4`; do
  ( while true; do true; done ) &
done
3. Check whether the gear's cgroup gets throttled (see the sketch below).
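
One way to verify step 3 is to read the gear's cgroup counters directly on the node. This is a minimal sketch, assuming the node mounts the OpenShift cgroup hierarchy at /cgroup/all/openshift (the mount point and file names are taken from the trace log in comment 2) and that the gear's UUID is passed as an argument:

#!/bin/bash
# Hypothetical check script; run on the node as root.
GEAR_UUID=${1:?usage: $0 <gear-uuid>}
CG=/cgroup/all/openshift/$GEAR_UUID

# Once Watchman throttles the gear, cpu.stat:nr_throttled should climb
# and cpu.cfs_quota_us should drop to the throttled quota.
grep -H "" "$CG"/cpu.stat "$CG"/cpuacct.usage "$CG"/cpu.cfs_quota_us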

Actual results:
The gear is not throttled.
There is no related info in /var/log/messages.
There is no related info in /var/log/openshift/node/cgroup.log.

Expected results:
The gear should be throttled when it burns up CPU.

Additional info:
This appears to be a regression from bug 989706.

Comment 1 Fotios Lindiakos 2013-07-31 18:04:28 UTC
The original PR was filed under the wrong BZ; this is the correct PR: https://github.com/openshift/origin-server/pull/3242

Comment 2 Meng Bo 2013-08-01 04:26:35 UTC
Checked on devenv-stage_429; the issue is fixed.

The gear is now throttled under abuse. Related log entries can be found in /var/log/messages and cgroups-trace.log.

# tailf /var/log/messages
Aug  1 00:20:12 ip-10-245-134-221 python: rhcsh
Aug  1 00:20:28 ip-10-245-134-221 rhc-watchman[1923]: Running rhc-watchman => delay: 20s, exception threshold: 10
Aug  1 00:20:48 ip-10-245-134-221 rhc-watchman[1923]: Running rhc-watchman => delay: 20s, exception threshold: 10
Aug  1 00:21:08 ip-10-245-134-221 rhc-watchman[1923]: Running rhc-watchman => delay: 20s, exception threshold: 10
Aug  1 00:21:08 ip-10-245-134-221 rhc-watchman[1923]: Throttler: throttle => 7e534df0fa6111e2a7d312313d01852f (462.808)


# tailf /var/log/openshift/node/cgroups-trace.log
7e534df0fa6111e2a7d312313d01852f/cpuacct.usage:18802369166
7e534df0fa6111e2a7d312313d01852f/cpu.cfs_quota_us:30000

August 01 00:21:41 INFO oo_spawn running grep -H "" */{cpu.stat,cpuacct.usage,cpu.cfs_quota_us} 2> /dev/null: {:unsetenv_others=>false, :close_others=>true, :in=>"/dev/null", :chdir=>"/cgroup/all/openshift", :out=>#<IO:fd 12>, :err=>#<IO:fd 10>}
August 01 00:21:41 INFO oo_spawn buffer(11/) 7e534df0fa6111e2a7d312313d01852f/cpu.stat:nr_periods 744
7e534df0fa6111e2a7d312313d01852f/cpu.stat:nr_throttled 328
7e534df0fa6111e2a7d312313d01852f/cpu.stat:throttled_time 21575515452
7e534df0fa6111e2a7d312313d01852f/cpuacct.usage:20308906976
7e534df0fa6111e2a7d312313d01852f/cpu.cfs_quota_us:30000
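
For reference, the cpu.cfs_quota_us value of 30000 above caps the throttled gear at roughly 30% of one core, assuming the kernel's default CFS period of 100000 us (cpu.cfs_period_us itself was not captured in the log):

quota=30000     # cpu.cfs_quota_us read from the trace log above
period=100000   # assumed kernel default cpu.cfs_period_us (not captured)
echo "$(( quota * 100 / period ))% of one core"   # prints: 30% of one core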

