Description of problem:
I am trying to move a gear from one node to another. The gear has a quota of 2 GiB and is using 1.2 GiB. Although everything appears to be transferred to the destination node, the quota there is only set to 1 GiB. The lower quota causes startup to fail, which causes the move to fail.

Version-Release number of selected component (if applicable):
openshift-origin-broker-util-1.13.11-1.el6oso

How reproducible:
For the gear I am working on, 100%

Steps to Reproduce:
1. /usr/sbin/oo-admin-move -i <new-node> --gear_uuid <UUID>

Actual results:
Thu Sep 12 15:17:45 EDT 2013
URL: http://<URL>
Login: <LOGIN>
App UUID: <UUID>
Gear UUID: <UUID>
DEBUG: Source district uuid: <DISTRICT UUID>
DEBUG: Destination district uuid: <DISTRICT UUID>
DEBUG: Getting existing app 'www1' status before moving
DEBUG: Gear component 'php-5.3' was running
DEBUG: Stopping existing app cartridge 'haproxy-1.4' before moving
DEBUG: Stopping existing app cartridge 'php-5.3' before moving
DEBUG: Force stopping existing app cartridge 'php-5.3' before moving
DEBUG: Reserved uid '<UID>' on district: '<UUID>'
DEBUG: Creating new account for gear 'www1' on <NEW NODE>
DEBUG: Moving content for app 'www1', gear 'www1' to <NEW NODE>
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 22331
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 22331 killed;
DEBUG: Moving system components for app 'www1', gear 'www1' to <NEW NODE>
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 22883
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 22883 killed;
DEBUG: Starting cartridge 'haproxy-1.4' in 'www1' after move on <NEW NODE>
DEBUG: Moving failed. Rolling back gear 'www1' in 'www1' with delete on '<NEW NODE>'
Node execution failure (invalid exit code from node).

Expected results:
The move would be successful.
Additional info:

** Quota on the original node:
quota -v <UUID>
Disk quotas for user <UUID> (uid <UID>):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
/dev/mapper/EBSStore01-user_home01 1256968 0 2097152 17978 0 80000

** Snippet from mcollective.log on <NEW NODE>:
I, [2013-09-12T15:19:42.631300 #1209]  INFO -- : openshift.rb:134:in `execute_action' Finished executing action [start] (1)
I, [2013-09-12T15:19:42.712268 #1209]  INFO -- : openshift.rb:100:in `cartridge_do_action' cartridge_do_action failed (1)
------
Failed to execute: 'control start' for /var/lib/openshift/<UUID>/haproxy
Error writing to temporary file
Error writing to temporary file
CLIENT_MESSAGE: Warning gear <UUID> is using 117.95692443847656 of disk quota
------
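The limit mismatch between the two nodes can be checked mechanically rather than by eyeballing the tables. A minimal sketch, assuming the raw output of `quota -v <UUID>` is fed in on stdin; the `block_limit` helper name is hypothetical, and the field position matches the output format shown above:

```shell
#!/bin/sh
# block_limit: pull the hard block limit (4th field after the device) for a
# given filesystem out of `quota -v` output read on stdin.
# Hypothetical helper; not part of any OpenShift tooling.
block_limit() {
  awk -v fs="$1" '$1 == fs { print $4 }'
}

# Example against the output captured on the original node:
src_limit=$(block_limit /dev/mapper/EBSStore01-user_home01 <<'EOF'
Disk quotas for user <UUID> (uid <UID>):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
/dev/mapper/EBSStore01-user_home01 1256968 0 2097152 17978 0 80000
EOF
)
echo "$src_limit"   # 2097152 1K-blocks, i.e. the expected 2 GiB on the source
```

Running the same extraction against the destination node's `quota -v` output and comparing the two values makes the 1 GiB regression obvious.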
Reproduced this bug on devenv_3780. Steps:
1) Create one scalable app with mongodb.
2) Add 1G storage for the mongodb cartridge.
3) rhc ssh into the mongodb gear and create a 1.5G file:
   dd if=/dev/zero of=~app-root/data/test bs=1M count=1500
   [182409630553174657990656-zqd.dev.rhcloud.com 182409630553174657990656]\> quota -s
   Disk quotas for user 182409630553174657990656 (uid 5199):
        Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
     /dev/xvde2     1981M       0   2048M              42       0   40000
4) Move this mongodb gear.
5) Check quota:
   [182409630553174657990656-zqd.dev.rhcloud.com 182409630553174657990656]\> quota -s
   Disk quotas for user 182409630553174657990656 (uid 5199):
        Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
     /dev/xvde2     1981M*      0   1024M              42       0   40000

After the move the hard limit has dropped from 2048M to 1024M while usage is still 1981M, leaving the gear over quota (the asterisk).
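Until the fix lands, the lowered limit can be restored by hand on the destination node. A sketch only, assuming root on the node and that the gear should be back at a 2G hard block limit with the 40000 inode limit from the repro above; `setquota` takes limits in 1K-blocks, so the hypothetical `to_blocks` converter does the unit math:

```shell
#!/bin/sh
# to_blocks: convert a human-readable size (e.g. 2G, 1024M) to 1K-blocks,
# the unit setquota(8) expects. Hypothetical helper for this sketch.
to_blocks() {
  case "$1" in
    *G) echo $(( ${1%G} * 1024 * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

# Restore the 2G hard block limit for the gear user on the destination node
# (run as root there; uid/device values are from the repro above, and the
# soft limits stay 0 as in the captured quota output):
# setquota -u 182409630553174657990656 0 $(to_blocks 2G) 0 40000 /dev/xvde2
```

This is a manual workaround, not what the eventual fix does; the real fix has to make oo-admin-move carry the gear's quota over during the move.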
Tested some scenarios as below:
1) Non-scaled app: OK.
2) Scaled app, web framework gear: OK.
3) Scaled app with mongodb (comment 1): moving the mongodb gear succeeds, but after the move the gear's quota limit is reset (refer to comment 1).
4) Scaled app with mysql or postgresql (same as the comment 1 steps, but with a mysql or postgresql db cartridge): moving that db gear fails:

[root@ip-10-154-184-93 openshift]# oo-admin-move --gear_uuid 386038769764552255995904 -i ip-10-184-6-242
URL: http://zqphps-zqd.dev.rhcloud.com
Login: zzhao
App UUID: 5232e0f8bef23b67e700005c
Gear UUID: 5232e138bef23b67e7000080
DEBUG: Source district uuid: c0a525681c2411e3aad322000a9ab85d
DEBUG: Destination district uuid: c0a525681c2411e3aad322000a9ab85d
DEBUG: Getting existing app 'zqphps' status before moving
DEBUG: Gear component 'php-5.3' was running
DEBUG: Stopping existing app cartridge 'mysql-5.1' before moving
DEBUG: Creating new account for gear '386038769764552255995904' on ip-10-184-6-242
DEBUG: Moving content for app 'zqphps', gear '386038769764552255995904' to ip-10-184-6-242
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 20200
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 20200 killed;
DEBUG: Moving system components for app 'zqphps', gear '386038769764552255995904' to ip-10-184-6-242
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 20714
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 20714 killed;
DEBUG: Starting cartridge 'mysql-5.1' in 'zqphps' after move on ip-10-184-6-242
DEBUG: Moving failed. Rolling back gear '386038769764552255995904' in 'zqphps' with delete on 'ip-10-184-6-242'
Node execution failure (invalid exit code from node).
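Retesting scenarios like these is easier with a small wrapper that records the gear's hard block limit before the move and compares it afterwards. A sketch under assumptions: `oo-admin-move` is the command shown above, the gear UUID doubles as the Unix user name on the node (as in the repro), `limit_from_repquota` is a hypothetical helper parsing standard `repquota -a` output, and root ssh access to both nodes is available:

```shell
#!/bin/sh
# limit_from_repquota: extract the hard block limit for one user from
# `repquota -a` output read on stdin (fields: user -- used soft hard ...).
# Hypothetical helper for this sketch.
limit_from_repquota() {
  awk -v u="$1" '$1 == u { print $5 }'
}

# check_move: move a gear and flag any change in its hard block limit.
# Assumes root ssh to both source and destination nodes.
check_move() {
  uuid="$1"; src="$2"; dest="$3"
  before=$(ssh "root@$src"  repquota -a | limit_from_repquota "$uuid")
  oo-admin-move --gear_uuid "$uuid" -i "$dest" || return 1
  after=$(ssh "root@$dest" repquota -a | limit_from_repquota "$uuid")
  [ "$before" = "$after" ] || echo "quota changed for $uuid: $before -> $after"
}
```

With the bug present, scenario 3 would print a "quota changed" line (2097152 -> 1048576 blocks) rather than failing outright, since the mongodb move itself succeeds.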
Fixed with --> https://github.com/openshift/origin-server/pull/3633
Commit pushed to master at https://github.com/openshift/origin-server https://github.com/openshift/origin-server/commit/6a327595fe9315061216ee08221d0b78be3dd31d Fix for bug 1007582 and bug 1008517
Tested this bug on devenv_stage_477. Checked all the scenarios in Comment 2 and they work well, so moving to VERIFIED.