| Summary: | Fail to move a non-scalable application across district with different size profile | ||
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Ma xiaoqiang <xiama> |
| Component: | Node | Assignee: | Luke Meyer <lmeyer> |
| Status: | CLOSED WORKSFORME | QA Contact: | libra bugs <libra-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | 2.0.0 | CC: | bleanhar, jialiu, libra-onpremise-devel, xiama, xtian |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2014-01-30 19:37:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Tested this issue on devenv_4076. It still produces an error when moving a non-scalable app across districts with different gear sizes; this feature should be supported for non-scalable apps, and the move should succeed.

```
oo-admin-move --gear_uuid 5295b3da24987bbdc4000030 -i ip-10-80-190-124 --change_district
URL: http://z4-zqd.dev.rhcloud.com
Login: zzhao
App UUID: 5295b3da24987bbdc4000030
Gear UUID: 5295b3da24987bbdc4000030
DEBUG: Source district uuid: 5295b34824987b03e2000001
DEBUG: Destination district uuid: 529563a624987b27b8000001
DEBUG: Getting existing app 'z4' status before moving
DEBUG: Gear component 'php-5.3' was running
DEBUG: Stopping existing app cartridge 'php-5.3' before moving
DEBUG: Force stopping existing app cartridge 'php-5.3' before moving
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:287:in `reserve_uid': uid could not be reserved in target district '529563a624987b27b8000001'. Please ensure the target district has available capacity. (OpenShift::OOException)
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:1941:in `move_gear'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:1895:in `block in move_gear_secure'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.18.0/app/models/application.rb:1577:in `run_in_application_lock'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.18.0/lib/openshift/mcollective_application_container_proxy.rb:1894:in `move_gear_secure'
	from /usr/sbin/oo-admin-move:112:in `<main>'
```

It looks like you're testing this against Online. Am I correct? We should clone the bug there if so. You have tested it against Online, right?

Yes, the above is my testing result against Online.
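For context on the `OOException` above: `reserve_uid` hands out a UID from the target district's pool and fails when the pool is exhausted. The following is a minimal sketch of that behavior only; the `District` class here is hypothetical and is not the actual OpenShift Origin model.

```ruby
# Hypothetical sketch of district UID reservation (not OpenShift code).
# reserve_uid takes one UID from the pool and raises once
# available_capacity reaches zero, matching the error message above.
class District
  attr_reader :uuid, :available_uids

  def initialize(uuid, uids)
    @uuid = uuid
    @available_uids = uids
  end

  def available_capacity
    @available_uids.size
  end

  def reserve_uid
    if @available_uids.empty?
      raise "uid could not be reserved in target district '#{@uuid}'. " \
            "Please ensure the target district has available capacity."
    end
    @available_uids.shift
  end
end

dist = District.new("529563a624987b27b8000001", [1000, 1001])
p dist.reserve_uid        # => 1000
p dist.available_capacity # => 1
```

This is why the same command can succeed or fail depending on the destination district's `available_capacity` at the time of the move.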
OK, I'll create a new bug if this issue still exists on the latest Online env.

What was the result from testing against Online?

The problem cannot be reproduced on devenv_4247. https://bugzilla.redhat.com/show_bug.cgi?id=1028364 has been verified.

The error in Comment #2 is different from the error in the description of this bug (and the upstream bug). Are you certain the destination has capacity? `reserve_uid': uid could not be reserved in target district '529563a624987b27b8000001'. Please ensure the target district has available capacity.

Yes, Comment #2 is the situation where the destination does have capacity; it is a different issue from this bug. Sorry for the confusion. This bug can be verified.

Let us know what to do with this bug. Thanks!

Checked it on puddle [2.0.3/2014-01-22.1]:
```
# oo-admin-ctl-district
{"_id"=>"52e1bc08c9a85c28c5000001",
 "active_server_identities_size"=>1,
 "available_capacity"=>5999,
 "available_uids"=>"<5999 uids hidden>",
 "created_at"=>2014-01-24 01:04:08 UTC,
 "gear_size"=>"small",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"dist1",
 "server_identities"=>[{"name"=>"broker.osev2-auto.com.cn", "active"=>true}],
 "updated_at"=>2014-01-24 01:05:48 UTC,
 "uuid"=>"52e1bc08c9a85c28c5000001"}
{"_id"=>"52e1bc1cc9a85cf4f7000001",
 "active_server_identities_size"=>1,
 "available_capacity"=>6000,
 "available_uids"=>"<6000 uids hidden>",
 "created_at"=>2014-01-24 01:04:28 UTC,
 "gear_size"=>"medium",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"dist2",
 "server_identities"=>[{"name"=>"node1.osev2-auto.com.cn", "active"=>true}],
 "updated_at"=>2014-01-24 01:06:11 UTC,
 "uuid"=>"52e1bc1cc9a85cf4f7000001"}
```
```
# oo-admin-move --gear_uuid 52e1bc92c9a85c6f00000e2f -i node1.osev2-auto.com.cn
# oo-admin-move --gear_uuid 52e1bc92c9a85c6f00000e2f -i broker.osev2-auto.com.cn
```
Both moves succeeded. This bug can be verified.
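The precondition being verified here (the target district must have spare capacity) can also be checked programmatically from the district records. A minimal sketch, assuming the hash layout shown in the `oo-admin-ctl-district` dump above; the `district_has_capacity?` helper is illustrative only and is not part of the OpenShift tooling:

```ruby
# Given a district record shaped like the oo-admin-ctl-district output,
# decide whether a gear can still be placed into it. Illustrative only.
def district_has_capacity?(district)
  district["available_capacity"].to_i > 0
end

dist2 = {
  "name"               => "dist2",
  "gear_size"          => "medium",
  "available_capacity" => 6000,
  "max_capacity"       => 6000,
}

puts district_has_capacity?(dist2) # prints "true"
```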
So, I just want to be clear here. Is this something that was ever actually a problem in OSE 2.0.0 and has been fixed since? Or is this a CLOSED WORKSFORME bug? I'm like 99% sure this was magically fixed by some other change we put in since the bug was filed. I'm fine with closing it as WORKSFORME and letting QE re-open it if we're wrong.
Description of problem:
Fail to move a non-scalable application across districts with different size profiles.

Version-Release number of selected component (if applicable):
puddle [2.1/2013-11-05.1]
rubygem-openshift-origin-controller-1.17.0-1.git.45.a39c0ef.el6op.noarch

How reproducible:
Always

Steps to Reproduce:
1. Set up a multi-node env; add node1 to district1 with the small profile, and add node3 to district2 with the medium profile.
2. Create an app with the small gear size: `rhc app create testapp php -g small`
3. Move the app across districts: `oo-admin-move --gear_uuid $UUID -i node3`

Actual results:

```
DEBUG: Fixing DNS and mongo for gear 'testapp' after move
DEBUG: Changing server identity of 'testapp' from 'node2.ose-move.test.com' to 'node3.ose-move.test.com'
DEBUG: Moving failed.  Rolling back gear 'testapp' in 'testapp' with delete on 'node3.ose-move.test.com'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/group_instance.rb:143:in `[]=': string not matched (IndexError)
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/group_instance.rb:143:in `set_group_override'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/group_instance.rb:66:in `gear_size='
	from /opt/rh/ruby193/root/usr/share/gems/gems/mongoid-3.1.4/lib/mongoid/relations/proxy.rb:143:in `method_missing'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1801:in `move_gear_post'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1966:in `move_gear'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1893:in `block in move_gear_secure'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.17.0/app/models/application.rb:1591:in `run_in_application_lock'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.17.0/lib/openshift/mcollective_application_container_proxy.rb:1892:in `move_gear_secure'
	from /usr/sbin/oo-admin-move:112:in `<main>'
```

Expected results:
The app should be moved successfully.

Additional info:
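A note on the `string not matched (IndexError)` in the trace: that is Ruby's `String#[]=` behavior. If a value expected to be a Hash (such as a group override) is actually a String, hash-style assignment attempts a substring replacement and raises exactly this error when the key does not occur as a substring. A minimal standalone demonstration (not OpenShift code):

```ruby
# Ruby raises IndexError ("string not matched") when String#[]= is
# given a substring that does not occur in the receiver. So if a
# field expected to be a Hash is actually a String, hash-style
# assignment fails exactly this way.
override = "small"                 # a String where a Hash was expected

begin
  override["gear_size"] = "medium" # tries to replace the substring "gear_size"
rescue IndexError => e
  puts e.message                   # prints "string not matched"
end
```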