Description of problem:
An application was created with a medium gear size. For testing purposes we then had to move it to a specific node, and the destination node was a small one. The oo-admin-move command completed without problem and no error was raised, but the application's gear size and quotas were not changed by the operation. The application is now on a small node with a medium gear size.

Version-Release number of selected component (if applicable): 2.2

How reproducible: Always

Steps to Reproduce:
1. Create an application with a medium gear size.
2. Move it to a node with the small gear profile.

Actual results:
The move completes without error, and the application gear size, quotas, and limits don't change.

Expected results:
The oo-admin-move command should raise an error.

Additional info:
Could you supply the oo-admin-move command you used to move the gear? I believe the quotas are actually changed when the gear is moved to a small node. Run the commands below in your gear to see whether the current limits match the "small" node profile's resource limits or the original "medium" profile's:

  # oo-cgroup-read memory.limit_in_bytes
  # oo-cgroup-read cpu.cfs_quota_us

You are correct that the system still reports the gear as the initial, larger size, which is certainly an issue.
Use this command to reproduce the bug:

  # oo-admin-move --gear_uuid 54ff0733c3215e8027002de2 -p small --change_district

Please find the output of the commands from the gears:

  ]> oo-cgroup-read memory.limit_in_bytes
  536870912
  ]> oo-cgroup-read cpu.cfs_quota_us
  30000

So they have the memory.limit_in_bytes and cpu.cfs_quota_us from their dev-small profile. But when they run quota -s:

  ]> quota -s
  Disk quotas for user 54ff0733c3215e8027002de2 (uid 6917):
       Filesystem                      blocks  quota  limit  grace  files  quota  limit  grace
       /dev/mapper/rootvg-openshift_lv  22796      0  3072M            901     0   160k

They still have the inode and block quotas of the previous (medium) profile. As a workaround they need to turn quotas off, turn them on again, and reapply the inode and block quotas.
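The workaround above can be sketched as the following root-shell sequence on the node hosting the gear. This is a sketch under assumptions, not the exact procedure used here: the filesystem path and the small-profile block/inode limits are placeholders, and the correct values should be taken from the small profile's resource_limits.conf on the destination node.

```shell
# Placeholders -- substitute real values for your deployment:
GEAR_UUID=54ff0733c3215e8027002de2   # gear user that owns the quota (from this report)
GEAR_FS=/var/lib/openshift           # filesystem the gear quota lives on (assumed)
SMALL_BLOCKS=1048576                 # block hard limit for the small profile (assumed)
SMALL_FILES=80000                    # inode hard limit for the small profile (assumed)

quotaoff "$GEAR_FS"                                  # 1. turn quotas off
quotaon  "$GEAR_FS"                                  # 2. turn them back on
setquota -u "$GEAR_UUID" 0 "$SMALL_BLOCKS" \
            0 "$SMALL_FILES" "$GEAR_FS"              # 3. reapply block/inode limits
quota -s -u "$GEAR_UUID"                             # verify the new limits took effect
```

setquota takes the soft and hard block limits first, then the soft and hard inode limits; soft limits are left at 0 (unenforced) here, matching the quota -s output above.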
Thanks for the info. It looks like the cgroup configuration is changed as it should be, but the quotas are indeed still set to those defined by the original node profile. The fix for this shouldn't be too difficult.
https://github.com/openshift/origin-server/pull/6172
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/4908a102bbb3c7a7f742f16841a250782701039e

Ensure proper quota is used when moving gears across node profiles

Bug 1229300
https://bugzilla.redhat.com/show_bug.cgi?id=1229300

When moving gears across node profiles, ensure that the quota for the new node profile is imposed on the gear during the move.
QE needs a new puddle/package to verify this bug.
Checked in puddle 2-2-2015-09-17:

  # rpm -qa | grep broker-mcollective
  rubygem-openshift-origin-msg-broker-mcollective-1.34.1.1-1.el6op.noarch

It doesn't contain the fixed package (rubygem-openshift-origin-msg-broker-mcollective-1.35.3.1-1.el6op), so the bug can still be reproduced.
Found a small issue on puddle 2015-09-18.2: after moving a medium gear to a small-profile node, "rhc app show <appname>" still shows "Gears: 1 medium". (The same happens when moving a small gear to a medium-profile node.) Detailed info below:

  [root@dhcp-129-219 ~]# rhc app show pltest
  pltest @ http://pltest-domzyp.ose22-auto.com.cn/ (uuid: 55ff783782611d559a000027)
  ---------------------------------------------------------------------------------
    Domain:     domzyp
    Created:    11:23 AM
    Gears:      1 (defaults to medium)
    Git URL:    ssh://domzyp-pltest-1.com.cn/~/git/pltest.git/
    SSH:        domzyp-pltest-1.com.cn
    Deployment: auto (on git push)

    perl-5.10 (Perl 5.10)
    ---------------------
      Gears: 1 medium

  [root@dhcp-129-219 ~]# rhc app show pltest -g
  ID              State   Cartridges Size  SSH URL
  --------------- ------- ---------- ----- ------------------------
  domzyp-pltest-1 started perl-5.10  small domzyp-pltest-1.com.cn
Opened a bug to track the issue from comment 15: https://bugzilla.redhat.com/show_bug.cgi?id=1265528
Since the original bug has been fixed in the current puddle (OSE 2.2 / 2015-09-22.1), moving the bug to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1844.html