Bug 1229300
| Summary: | oo-admin-move across node profiles should update quota limits appropriately | | |
| --- | --- | --- | --- |
| Product: | OpenShift Container Platform | Reporter: | Miheer Salunke <misalunk> |
| Component: | oc | Assignee: | Timothy Williams <tiwillia> |
| Status: | CLOSED ERRATA | QA Contact: | libra bugs <libra-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.2.0 | CC: | adellape, anli, jokerman, libra-onpremise-devel, misalunk, mmccomas, tiwillia, yanpzhan |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | rubygem-openshift-origin-msg-broker-mcollective-1.35.3.1-1.el6op | Doc Type: | Bug Fix |
| Doc Text: | Previously, when moving a non-scaled application across node profiles, the proper quota for the new profile was not applied to the gear. The gear still used the quota from its previous gear size. Additionally, any additional gear storage was not added to the quota of the new gear. This bug fix ensures the new node profile's quota limits are used, taking into account additional storage the gear may have. As a result, gears moved across node profiles have the proper quota applied, and additional gear storage still exists after the move. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-09-30 16:37:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Miheer Salunke
2015-06-08 12:18:16 UTC
Could you supply the oo-admin-move command you used to move the gear? I believe the quotas are actually changed when the gear is moved to a small node. Run the below in your gear to see if the current limits match the "small" node profile's resource limits or the "large" node profile's:

```
# oo-cgroup-read memory.limit_in_bytes
# oo-cgroup-read cpu.cfs_quota_us
```

You are correct in that the system still reports the gear as the initial, larger size (which is certainly an issue).

Use this command to reproduce the bug:

```
oo-admin-move --gear_uuid 54ff0733c3215e8027002de2 -p small --change_district
```

Please find the output of the commands from the gears:

```
]> oo-cgroup-read memory.limit_in_bytes
536870912
]> oo-cgroup-read cpu.cfs_quota_us
30000
```

So they have the memory.limit_in_bytes and the cpu.cfs_quota_us from their dev-small profile. But when they run quota -s:

```
]> quota -s
Disk quotas for user 54ff0733c3215e8027002de2 (uid 6917):
     Filesystem   blocks   quota   limit   grace   files   quota   limit   grace
/dev/mapper/rootvg-openshift_lv
                  22796        0   3072M             901       0    160k
```

They still have the quota inodes and blocks of the medium (previous) profile. As a workaround, they need to turn quotas off, turn them on again, and reapply the inodes and blocks quotas (sketched at the end of this report).

Thanks for the info. It looks like the cgroup configuration is changed as it should be, but the quotas are indeed still set to the quotas defined by the original node profile. The fix for this shouldn't be too difficult.

Commit pushed to master at https://github.com/openshift/origin-server
https://github.com/openshift/origin-server/commit/4908a102bbb3c7a7f742f16841a250782701039e

Ensure proper quota is used when moving gears across node profiles
Bug 1229300 https://bugzilla.redhat.com/show_bug.cgi?id=1229300
When moving gears across node profiles, ensure that the quota for the new node profile is imposed on the gear during the move.

QE needs a new puddle/package to verify this bug. Checked in puddle 2-2-2015-09-17:

```
# rpm -qa | grep broker-mcollective
rubygem-openshift-origin-msg-broker-mcollective-1.34.1.1-1.el6op.noarch
```

It doesn't contain the fixed package, rubygem-openshift-origin-msg-broker-mcollective-1.35.3.1-1.el6op, so the bug can still be reproduced.

Found a small issue on puddle 2015-09-18.2: after moving a medium gear to a small-profile node, "rhc app show <appname>" still showed "Gears: 1 medium". (Same when moving a small gear to a medium-profile node.) Detailed info below:

```
[root@dhcp-129-219 ~]# rhc app show pltest
pltest @ http://pltest-domzyp.ose22-auto.com.cn/ (uuid: 55ff783782611d559a000027)
---------------------------------------------------------------------------------
  Domain:     domzyp
  Created:    11:23 AM
  Gears:      1 (defaults to medium)
  Git URL:    ssh://domzyp-pltest-1.com.cn/~/git/pltest.git/
  SSH:        domzyp-pltest-1.com.cn
  Deployment: auto (on git push)

  perl-5.10 (Perl 5.10)
  ---------------------
    Gears: 1 medium

[root@dhcp-129-219 ~]# rhc app show pltest -g
ID              State   Cartridges Size  SSH URL
--------------- ------- ---------- ----- -----------------------
domzyp-pltest-1 started perl-5.10  small domzyp-pltest-1.com.cn
```

Opened a bug to track the issue in comment 15: https://bugzilla.redhat.com/show_bug.cgi?id=1265528

Since the original bug has been fixed in the current puddle (OSE 2.2/2015-09-22.1), moving the bug to Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1844.html
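A note for anyone retesting: the fix ships in the broker's mcollective plugin, so it is worth confirming the installed package version first, since the earlier verification attempt failed for exactly that reason. A minimal check, using only commands already quoted in this report:

```bash
# On the broker host: confirm the puddle actually contains the fixed plugin
# before retesting (the first verification attempt above still had 1.34.x).
rpm -qa | grep broker-mcollective
# Expect rubygem-openshift-origin-msg-broker-mcollective-1.35.3.1-1.el6op or later.
```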
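To verify the behavior itself, the commands quoted earlier in this report combine into a quick post-move check run from inside the gear:

```bash
# Run inside the moved gear. After the fix, BOTH the cgroup limits and the
# disk quota should match the target ("small") profile; before the fix, only
# the cgroup values changed while quota -s kept the old profile's limits.
oo-cgroup-read memory.limit_in_bytes   # 536870912 (512 MiB) for dev-small in this report
oo-cgroup-read cpu.cfs_quota_us        # 30000 for dev-small in this report
quota -s                               # block/inode limits should now match small too
```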
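Finally, for nodes that cannot take the fixed package yet, the workaround described in the comments (quotas off and on again, then reapply the limits) might look roughly like the sketch below. quotaoff, quotaon, and setquota are standard Linux quota tools, the gear UUID and filesystem come from this report, and the numeric limits are illustrative placeholders rather than values taken from any node profile:

```bash
#!/bin/bash
# Rough sketch of the workaround, run as root on the node: cycle quotas on
# the gear filesystem, then reapply limits that match the TARGET profile.
# The numeric limits below are PLACEHOLDERS -- substitute the target
# profile's quota values, plus any additional gear storage the gear had.
GEAR_UUID=54ff0733c3215e8027002de2
FS=/dev/mapper/rootvg-openshift_lv

quotaoff "$FS"   # turn quotas off on the gear filesystem...
quotaon "$FS"    # ...and back on again

# setquota -u <user> <block-soft> <block-hard> <inode-soft> <inode-hard> <fs>
# Example: 1 GiB block hard limit (in 1 KiB blocks) and an 80k inode limit.
setquota -u "$GEAR_UUID" 0 1048576 0 80000 "$FS"
```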