Bug 863998 - Can't move the app-gear for scalable applications
Product: OpenShift Origin
Classification: Red Hat
Component: Containers
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Mrunal Patel
QA Contact: libra bugs
Reported: 2012-10-08 05:55 EDT by Rony Gong
Modified: 2015-05-14 19:00 EDT
CC List: 2 users

Doc Type: Bug Fix
Last Closed: 2012-11-06 13:50:30 EST
Type: Bug

Attachments
development.log (10.14 KB, text/plain)
2012-10-08 05:55 EDT, Rony Gong

Description Rony Gong 2012-10-08 05:55:47 EDT
Created attachment 623369: development.log

Description of problem:
Can't move the app gear within a district for scalable applications. Other gears, such as the haproxy, mysql-5.1, mongodb, and postgresql gears, can be moved.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a scalable app, e.g. jbosseap
2. Get the app gear UUID via the REST API:
curl -k -X GET -H 'Accept: application/xml' --user qgong@redhat.com:111111 https://ec2-23-22-228-78.compute-1.amazonaws.com/broker/rest/domains/qgong14/applications/qsjbosseap/gears
Response excerpt:
          <proxy-host nil="true"></proxy-host>
          <internal-port nil="true"></internal-port>
          <proxy-port nil="true"></proxy-port>
3. Move the app gear by its UUID within the district:
oo-admin-move --gear_uuid b098ee02eb894085ad6c364f70fa6855 -i ip-10-120-203-251
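The gear UUIDs can be pulled out of the REST response without reading the raw XML by hand. A minimal sketch, using a hypothetical trimmed-down response (a real payload carries many more fields, as the excerpt above shows):

```shell
# Hypothetical, trimmed-down gears XML; the real response also includes
# cartridge and proxy fields.
xml='<gears><gear><uuid>b098ee02eb894085ad6c364f70fa6855</uuid></gear></gears>'

# Extract every <uuid> value; -oP (GNU grep) enables Perl lookarounds.
echo "$xml" | grep -oP '(?<=<uuid>)[0-9a-f]+(?=</uuid>)'
```

Piping the curl output from step 2 through the same grep yields the UUID to pass to oo-admin-move in step 3.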

Actual results:
[root@ip-10-120-203-251 openshift]# oo-admin-move  --gear_uuid b098ee02eb894085ad6c364f70fa6855 -i ip-10-120-203-251
URL: http://qsjbossas-qgong14.dev.rhcloud.com
Login: qgong@redhat.com
App UUID: 68fad773938d43c6bb73f5775f653639
Gear UUID: b098ee02eb894085ad6c364f70fa6855
DEBUG: Source district uuid: e0d75a19a52040349033acb53b45efef
DEBUG: Destination district uuid: e0d75a19a52040349033acb53b45efef
DEBUG: District unchanged keeping uid
DEBUG: Getting existing app 'qsjbossas' status before moving
DEBUG: Gear component 'jbossas-7' was running
DEBUG: Stopping existing app cartridge 'jbossas-7' before moving
DEBUG: Force stopping existing app cartridge 'jbossas-7' before moving
DEBUG: Creating new account for gear 'b098ee02eb' on ip-10-120-203-251
DEBUG: Moving content for app 'qsjbossas', gear 'b098ee02eb' to ip-10-120-203-251
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 2333
echo Agent pid 2333 killed;
DEBUG: Performing cartridge level move for 'jbossas-7' on ip-10-120-203-251
DEBUG: Moving failed.  Rolling back gear 'b098ee02eb' 'qsjbossas' with remove-httpd-proxy on 'ip-10-120-203-251'
DEBUG: Moving failed.  Rolling back gear 'b098ee02eb' in 'qsjbossas' with destroy on 'ip-10-120-203-251'
/usr/lib/ruby/gems/1.8/gems/openshift-origin-msg-broker-mcollective-0.4.3/lib/openshift-origin-msg-broker-mcollective/openshift/mcollective_application_container_proxy.rb:1324:in `run_cartridge_command_old': Node execution failure (invalid exit code from node).  If the problem persists please contact Red Hat support. (OpenShift::NodeException)
	from /var/www/openshift/broker/lib/express/broker/mcollective_ext.rb:11:in `run_cartridge_command'
	from /usr/lib/ruby/gems/1.8/gems/openshift-origin-msg-broker-mcollective-0.4.3/lib/openshift-origin-msg-broker-mcollective/openshift/mcollective_application_container_proxy.rb:795:in `send'
	from /usr/lib/ruby/gems/1.8/gems/openshift-origin-msg-broker-mcollective-0.4.3/lib/openshift-origin-msg-broker-mcollective/openshift/mcollective_application_container_proxy.rb:795:in `move_gear'
	from /usr/lib/ruby/gems/1.8/gems/openshift-origin-msg-broker-mcollective-0.4.3/lib/openshift-origin-msg-broker-mcollective/openshift/mcollective_application_container_proxy.rb:769:in `each'
	from /usr/lib/ruby/gems/1.8/gems/openshift-origin-msg-broker-mcollective-0.4.3/lib/openshift-origin-msg-broker-mcollective/openshift/mcollective_application_container_proxy.rb:769:in `move_gear'
	from /usr/bin/oo-admin-move:111

Expected results:
The gear move succeeds.

Additional info:
This error happens for all application types; the move works for non-scalable apps.
Comment 1 Rony Gong 2012-10-08 06:26:53 EDT
The same error also occurs when moving gears across districts.
Comment 2 Dan McPherson 2012-10-08 10:30:54 EDT
It seems the repo dir on the scaled app has changed from the app name to the gear name. Shouldn't it still be the app name?


This is the error I got:

/usr/libexec/openshift/cartridges/abstract/info/bin/redeploy_config_dir.sh: line 11: pushd: /var/lib/openshift/27b0d30d18b044f8acae6522cf9cc165/git/danmcp990.git: No such file or directory
fatal: Not a git repository (or any of the parent directories): .git
Nothing found in .openshift/config/* to redeploy
/usr/libexec/openshift/cartridges/abstract/info/bin/redeploy_config_dir.sh: line 16: popd: directory stack empty
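The pushd failure above comes from the script assuming the git directory exists under the expected name. A guarded version would fail with one clear message instead of cascading through popd and git errors; this is a minimal sketch with an illustrative function name, not the actual cartridge script:

```shell
# Sketch of a guarded pushd, as the failing redeploy_config_dir.sh could
# use at line 11; the function name and path handling are illustrative only.
redeploy_config_dir() {
    local git_dir="$1"
    if pushd "$git_dir" >/dev/null 2>&1; then
        echo "redeploying config from $git_dir"
        popd >/dev/null
    else
        echo "Git repo dir not found: $git_dir" >&2
        return 1
    fi
}
```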

And this was the dir on that gear:

Comment 3 Mrunal Patel 2012-10-08 19:50:15 EDT
Fixed with https://github.com/openshift/origin-server/pull/623.
Comment 4 Xiaoli Tian 2012-10-09 21:30:10 EDT
(In reply to comment #3)
> Fixed with https://github.com/openshift/origin-server/pull/623.

The above pull request has been merged as of devenv_2302; moving this to ON_QA for verification.
Comment 5 Jianwei Hou 2012-10-10 00:32:24 EDT
Verified on devenv_2304

1. Setup multi-node env
2. Create a scalable app and call the REST API to get the app gear UUID

3. Move this gear within district
[root@ip-10-110-222-171 openshift]# oo-admin-move --gear_uuid 08eaaee26f584484b0f804065778032d -i ip-10-110-222-171
Move was successfull
4. Setup another district, move gear across districts
[root@ip-10-110-222-171 openshift]# oo-admin-move --gear_uuid a1209f6ee1004f44b58c86503f75df04 -i ip-10-110-222-171 --allow_change_district

Both the haproxy gear and the cartridge gear were moved within and across districts successfully.
