Bug 1028368 - Failure to move a non-scalable application across districts with different size profiles
Status: CLOSED WORKSFORME
Product: OpenShift Container Platform
Classification: Red Hat
Component: Pod
Version: 2.0.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Luke Meyer
QA Contact: libra bugs
Reported: 2013-11-08 05:01 EST by Ma xiaoqiang
Modified: 2017-03-08 12 EST
CC List: 5 users

Doc Type: Bug Fix
Last Closed: 2014-01-30 14:37:33 EST
Type: Bug


Attachments: None

Description Ma xiaoqiang 2013-11-08 05:01:50 EST
Description of problem:
Moving a non-scalable application across districts with different size profiles fails.

Version-Release number of selected component (if applicable):
puddle[2.1/2013-11-05.1]
rubygem-openshift-origin-controller-1.17.0-1.git.45.a39c0ef.el6op.noarch

How reproducible:
always

Steps to Reproduce:
1. Set up a multi-node env; add node1 to district1 with a small profile and node3 to district2 with a medium profile (see the sketch after these steps).
2. Create an app with a small gear size:
#rhc app create testapp php -g small
3. Move the app across districts:
#oo-admin-move --gear_uuid $UUID -i node3
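
For step 1, one possible district setup, sketched with the standard oo-admin-ctl-district create/add-node commands; the district names and node hostnames here are illustrative, following the pattern in the logs below:

# oo-admin-ctl-district -c create -n district1 -p small
# oo-admin-ctl-district -c add-node -n district1 -i node1.ose-move.test.com
# oo-admin-ctl-district -c create -n district2 -p medium
# oo-admin-ctl-district -c add-node -n district2 -i node3.ose-move.test.com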

Actual results:
Output:

DEBUG: Fixing DNS and mongo for gear 'testapp' after move
DEBUG: Changing server identity of 'testapp' from 'node2.ose-move.test.com' to 'node3.ose-move.test.com'
DEBUG: Moving failed.  Rolling back gear 'testapp' in 'testapp' with delete on 'node3.ose-move.test.com'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/group_instance.rb:143:in `[]=': string not matched (IndexError)
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/group_instance.rb:143:in `set_group_override'
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/group_instance.rb:66:in `gear_size='
        from /opt/rh/ruby193/root/usr/share/gems/gems/mongoid-3.1.4/lib/mongoid/relations/proxy.rb:143:in `method_missing'
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1801:in `move_gear_post'
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1966:in `move_gear'
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1893:in `block in move_gear_secure'
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/application.rb:1591:in `run_in_application_lock'
        from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1892:in `move_gear_secure'
        from /usr/sbin/oo-admin-move:112:in `<main>'
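
The "string not matched (IndexError)" above is what Ruby raises when []= is called on a String with a string index that is not a substring of the receiver, so the object indexed at group_instance.rb:143 in set_group_override appears to be a String where a Hash was expected. A minimal illustration of just that Ruby error (not the actual group_instance.rb code):

# ruby -e '"small"["gear_size"] = "medium"'
-e:1:in `[]=': string not matched (IndexError)
        from -e:1:in `<main>'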

Expected results:
The app should be moved successfully.

Additional info:
Comment 2 zhaozhanqi 2013-11-27 04:06:54 EST
Tested this issue on devenv_4076

There is still an error when moving a non-scalable app across districts with a different gear size. This should be supported for non-scalable apps, and the move should succeed.


 oo-admin-move --gear_uuid 5295b3da24987bbdc4000030 -i ip-10-80-190-124 --change_district
URL: http://z4-zqd.dev.rhcloud.com
Login: zzhao@redhat.com
App UUID: 5295b3da24987bbdc4000030
Gear UUID: 5295b3da24987bbdc4000030
DEBUG: Source district uuid: 5295b34824987b03e2000001
DEBUG: Destination district uuid: 529563a624987b27b8000001
DEBUG: Getting existing app 'z4' status before moving
DEBUG: Gear component 'php-5.3' was running
DEBUG: Stopping existing app cartridge 'php-5.3' before moving
DEBUG: Force stopping existing app cartridge 'php-5.3' before moving
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:287:in `reserve_uid': uid could not be reserved in target district '529563a624987b27b8000001'.  Please ensure the target district has available capacity. (OpenShift::OOException)
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:1941:in `move_gear'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:1895:in `block in move_gear_secure'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.18.0/app/models/application.rb:1577:in `run_in_application_lock'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:1894:in `move_gear_secure'
	from /usr/sbin/oo-admin-move:112:in `<main>'
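
The reserve_uid error points at destination district capacity. One way to check it is oo-admin-ctl-district, which with no arguments dumps every district record; assuming the usual record layout (fields such as "_id", "available_capacity", and "available_uids"), a minimal check for this destination district might be:

# oo-admin-ctl-district | grep -A 3 '"_id"=>"529563a624987b27b8000001"'

A non-zero "available_capacity" in the matched record indicates the district can still reserve a uid for the incoming gear.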
Comment 3 Brenton Leanhardt 2013-12-02 10:26:53 EST
It looks like you're testing this against Online.  Am I correct?  We should clone the bug there if so.
Comment 4 Ma xiaoqiang 2013-12-02 19:43:37 EST
You have tested it against Online, right?
Comment 5 zhaozhanqi 2013-12-02 20:59:58 EST
Yes, the above is my test result against Online. OK, I'll create a new bug if this issue still exists on the latest Online env.
Comment 6 Brenton Leanhardt 2014-01-20 09:22:43 EST
What was the result from testing against Online?
Comment 7 Ma xiaoqiang 2014-01-20 22:04:35 EST
The problem cannot be reproduced on devenv_4247.
The related bug https://bugzilla.redhat.com/show_bug.cgi?id=1028364 has been verified.
Comment 8 Brenton Leanhardt 2014-01-22 10:39:05 EST
The error in Comment #2 is different from the error in the description of the bug (and the upstream bug).

Are you certain the destination has capacity?

`reserve_uid': uid could not be reserved in target district '529563a624987b27b8000001'.  Please ensure the target district has available capacity.
Comment 9 zhaozhanqi 2014-01-22 22:35:43 EST
Yes, in comment #2 the destination did have capacity; that is a different issue from this bug. Sorry for the confusion. This bug can be verified.
Comment 10 Brenton Leanhardt 2014-01-23 09:01:10 EST
Let us know what to do with this bug.  Thanks!
Comment 11 Ma xiaoqiang 2014-01-23 21:00:14 EST
Checked it on puddle [2.0.3/2014-01-22.1]:
#oo-admin-ctl-district 
{"_id"=>"52e1bc08c9a85c28c5000001",
 "active_server_identities_size"=>1,
 "available_capacity"=>5999,
 "available_uids"=>"<5999 uids hidden>",
 "created_at"=>2014-01-24 01:04:08 UTC,
 "gear_size"=>"small",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"dist1",
 "server_identities"=>[{"name"=>"broker.osev2-auto.com.cn", "active"=>true}],
 "updated_at"=>2014-01-24 01:05:48 UTC,
 "uuid"=>"52e1bc08c9a85c28c5000001"}


{"_id"=>"52e1bc1cc9a85cf4f7000001",
 "active_server_identities_size"=>1,
 "available_capacity"=>6000,
 "available_uids"=>"<6000 uids hidden>",
 "created_at"=>2014-01-24 01:04:28 UTC,
 "gear_size"=>"medium",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"dist2",
 "server_identities"=>[{"name"=>"node1.osev2-auto.com.cn", "active"=>true}],
 "updated_at"=>2014-01-24 01:06:11 UTC,
 "uuid"=>"52e1bc1cc9a85cf4f7000001"}
# oo-admin-move --gear_uuid 52e1bc92c9a85c6f00000e2f -i node1.osev2-auto.com.cn
# oo-admin-move --gear_uuid 52e1bc92c9a85c6f00000e2f -i broker.osev2-auto.com.cn

The move succeeded. It can be verified.
Comment 12 Luke Meyer 2014-01-30 14:20:38 EST
So, I just want to be clear here. Is this something that was ever actually a problem in OSE 2.0.0, and has been fixed since? Or is this a CLOSED WORKSFORME bug?
Comment 13 Brenton Leanhardt 2014-01-30 14:37:33 EST
I'm like 99% sure this was magically fixed by some other change we put in since the bug was filed. I'm fine with closing it as WORKSFORME and letting QE re-open it if we're wrong.
