Bug 872026
| Summary: | Cannot move a zend app across districts, without districts, or from a node with no district to a node in a district |
|---|---|
| Product: | OKD |
| Component: | Pod |
| Version: | 2.x |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Status: | CLOSED WONTFIX |
| Severity: | low |
| Priority: | medium |
| Reporter: | Rony Gong 🔥 <qgong> |
| Assignee: | Lili Nader <lnader> |
| QA Contact: | libra bugs <libra-bugs> |
| CC: | dmcphers, mfisher, mmcgrath |
| Target Milestone: | --- |
| Target Release: | --- |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Last Closed: | 2013-04-16 16:50:57 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| Category: | --- |
| oVirt Team: | --- |
| Cloudforms Team: | --- |
| Attachments: | newest developement.log (attachment 646193) |
Description (Rony Gong 🔥, 2012-11-01 03:07:02 UTC)
Also happened: can't move a zend app from a node with no district to a district node. Also happened: can't move a zend app without districts.

Can't add a second node to a district:

```
oo-admin-ctl-district -c add-node -n dis2 -i ip-10-124-246-39
```

ERROR OUTPUT:

```
Node with server identity: ip-10-124-246-39 could not be found
```

even though I can ping the server from the 1st node:

```
ping ip-10-124-246-39
PING ip-10-124-246-39.ec2.internal (10.124.246.39) 56(84) bytes of data.
64 bytes from ip-10-124-246-39.ec2.internal (10.124.246.39): icmp_seq=1 ttl=63 time=0.425 ms
64 bytes from ip-10-124-246-39.ec2.internal (10.124.246.39): icmp_seq=2 ttl=63 time=0.414 ms
64 bytes from ip-10-124-246-39.ec2.internal (10.124.246.39): icmp_seq=3 ttl=63 time=0.407 ms
64 bytes from ip-10-124-246-39.ec2.internal (10.124.246.39): icmp_seq=4 ttl=63 time=0.374 ms
```

Hi Lili, did you do the multi-node setup? The only way add-node will work is when `mco ping` can see the second node: https://engineering.redhat.com/trac/Libra/wiki/Multi_Node_DevEnv. Basically the broker needs to have knowledge of all the nodes, and that is the system `oo-admin-ctl-district` should be run from.

Works for me for all the scenarios below. Created a 2-node environment.

Scenario 1:
- Created 2 districts and put each node in a different district.
- Created an app on dis1 and then moved it to node2 (on dis2).

Scenario 2:
- Removed node2 from dis2.
- Created another app in dis1 and moved it to node2 (no district).

Scenario 3:
- Removed node1 from dis1.
- Destroyed both districts.
- Created an app on node1 and moved it to node2.

NOTE: In all the above scenarios I created a new app. There is a known issue with trying to move the same app back and forth between the same nodes.

What's the known issue?
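The reply above boils down to: ICMP reachability is irrelevant, and `add-node` only works when the broker can see the node over mcollective under the same server identity. A minimal sketch of that pre-flight check; the `check_visibility` helper and the canned sample output are hypothetical, and on a real broker you would pipe `mco ping` into it:

```shell
# Hedged sketch: why a successful 'ping' does not make add-node work.
# check_visibility is a hypothetical helper: it reads 'mco ping'-style output
# on stdin and reports whether the broker can find the given server identity.
check_visibility() {
    if grep -q "$1"; then
        echo "visible: oo-admin-ctl-district -c add-node -n dis2 -i $1 should find it"
    else
        echo "NOT visible: expect 'Node with server identity: $1 could not be found'"
    fi
}

# On the broker (assuming the mcollective client is configured) one would run:
#   mco ping | check_visibility ip-10-124-246-39
# Simulated here with canned output that lacks the node, mirroring the report:
printf 'node1.example.com  time=22.52 ms\n' | check_visibility ip-10-124-246-39
```

The point matches the linked Multi_Node_DevEnv page: `oo-admin-ctl-district` must run on the broker, and the broker's mcollective view, not DNS or ICMP, defines which server identities exist.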
Tested on devenv_2484. Only the move of a zend app from a node without a district to a node within a district still fails:

```
[root@ip-10-90-246-99 openshift]# oo-admin-move --gear_uuid 0d780ac934634b0aa76e1a50c1b7fba7 -i domU-12-31-38-04-7A-E7 --allow_change_district
Mocha deprecation warning: Test::Unit or MiniTest must be loaded *before* Mocha.
Mocha deprecation warning: If you're integrating with another test library, you should probably require 'mocha_standalone' instead of 'mocha'
URL: http://qzend-qgong5.dev.rhcloud.com
Login: qgong
App UUID: 0d780ac934634b0aa76e1a50c1b7fba7
Gear UUID: 0d780ac934634b0aa76e1a50c1b7fba7
DEBUG: Source district uuid: NONE
DEBUG: Destination district uuid: 602b2f3a24a140b68d0c28d4407a422a
DEBUG: Getting existing app 'qzend' status before moving
DEBUG: Gear component 'zend-5.6' was running
DEBUG: Stopping existing app cartridge 'zend-5.6' before moving
DEBUG: Force stopping existing app cartridge 'zend-5.6' before moving
DEBUG: Reserved uid '1004' on district: '602b2f3a24a140b68d0c28d4407a422a'
DEBUG: Creating new account for gear 'qzend' on domU-12-31-38-04-7A-E7
DEBUG: Moving content for app 'qzend', gear 'qzend' to domU-12-31-38-04-7A-E7
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 8550
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 8550 killed;
DEBUG: Performing cartridge level move for 'zend-5.6' on domU-12-31-38-04-7A-E7
DEBUG: Starting cartridge 'zend-5.6' in 'qzend' after move on domU-12-31-38-04-7A-E7
DEBUG: Moving failed. Rolling back gear 'qzend' 'qzend' with remove-httpd-proxy on 'domU-12-31-38-04-7A-E7'
DEBUG: Moving failed. Rolling back gear 'qzend' in 'qzend' with destroy on 'domU-12-31-38-04-7A-E7'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.1.3/lib/openshift/mcollective_application_container_proxy.rb:1265:in `run_cartridge_command': Node execution failure (invalid exit code from node). If the problem persists please contact Red Hat support. (OpenShift::NodeException)
    from /var/www/openshift/broker/lib/express/broker/mcollective_ext.rb:12:in `run_cartridge_command'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.1.3/lib/openshift/mcollective_application_container_proxy.rb:673:in `block in move_gear_post'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.1.3/lib/openshift/mcollective_application_container_proxy.rb:665:in `each'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.1.3/lib/openshift/mcollective_application_container_proxy.rb:665:in `move_gear_post'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.1.3/lib/openshift/mcollective_application_container_proxy.rb:814:in `move_gear'
    from /usr/sbin/oo-admin-move:111:in `<main>'
```

Created attachment 646193 [details]: newest developement.log
*** Bug 880107 has been marked as a duplicate of this bug. ***

Retested on devenv_2518.
Still cannot move a zend app across districts successfully; the failure is caused by starting the app during the move.
Failure excerpt (fragment, `\n` escapes expanded):

```
Permission denied: make_sock: could not bind to address 127.1.245.129:16083
no listening sockets available, shutting down
Unable to open logs
Failed to start zend-5.6
", :data=>{:exitcode=>121, :output=>"Starting Deployment \e[32m[OK]\e[0m
[26.11.2012 03:14:58 SYSTEM] watchdog for zdd is running.
```
```
[root@ip-10-112-221-246 ~]# oo-admin-move --gear_uuid 7146a72df90d4af8b69deb894a8ad572 -i ip-10-152-131-32 --allow_change_district
URL: http://qzend2-qgong12.dev.rhcloud.com
Login: qgong
App UUID: 7146a72df90d4af8b69deb894a8ad572
Gear UUID: 7146a72df90d4af8b69deb894a8ad572
DEBUG: Source district uuid: 6848ca01690e427cb0c8cca5ca122418
DEBUG: Destination district uuid: 9d457eeccaf445a4a3fae8d59d0579ea
DEBUG: Getting existing app 'qzend2' status before moving
DEBUG: Gear component 'zend-5.6' was running
DEBUG: Stopping existing app cartridge 'zend-5.6' before moving
DEBUG: Force stopping existing app cartridge 'zend-5.6' before moving
DEBUG: Reserved uid '1005' on district: '9d457eeccaf445a4a3fae8d59d0579ea'
DEBUG: Creating new account for gear 'qzend2' on ip-10-152-131-32
DEBUG: Moving content for app 'qzend2', gear 'qzend2' to ip-10-152-131-32
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 18000
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 18000 killed;
DEBUG: Performing cartridge level move for 'zend-5.6' on ip-10-152-131-32
DEBUG: Starting cartridge 'zend-5.6' in 'qzend2' after move on ip-10-152-131-32
DEBUG: Moving failed. Rolling back gear 'qzend2' 'qzend2' with remove-httpd-proxy on 'ip-10-152-131-32'
DEBUG: Moving failed. Rolling back gear 'qzend2' in 'qzend2' with destroy on 'ip-10-152-131-32'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.2.1/lib/openshift/mcollective_application_container_proxy.rb:1190:in `run_cartridge_command': Node execution failure (invalid exit code from node). If the problem persists please contact Red Hat support. (OpenShift::NodeException)
    from /var/www/openshift/broker/lib/express/broker/mcollective_ext.rb:12:in `run_cartridge_command'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.2.1/lib/openshift/mcollective_application_container_proxy.rb:585:in `block in move_gear_post'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.2.1/lib/openshift/mcollective_application_container_proxy.rb:577:in `each'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.2.1/lib/openshift/mcollective_application_container_proxy.rb:577:in `move_gear_post'
    from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.2.1/lib/openshift/mcollective_application_container_proxy.rb:737:in `move_gear'
    from /usr/sbin/oo-admin-move:110:in `<main>'
```
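The root-cause line in the retest is httpd on the destination node failing to bind the gear's internal address (`127.1.245.129:16083`). A hedged diagnostic sketch follows; the `check_bind` helper and the canned `netstat` line are hypothetical, and whether the real cause was a conflicting listener, a stale gear, or an SELinux denial is not established by this bug:

```shell
# Hedged sketch: was the gear's proxy address already taken on the target node?
# check_bind is a hypothetical helper: it reads 'netstat -ltn'-style output on
# stdin and reports whether addr:port ($1:$2) already has a listener. Note that
# 'Permission denied' on bind can also mean an SELinux denial rather than a
# conflict, so this is only a first check, not a diagnosis.
check_bind() {
    if grep -q "$1:$2"; then
        echo "in use: another process already listens on $1:$2"
    else
        echo "free: look elsewhere (e.g. SELinux AVC denials) for the bind failure"
    fi
}

# On the destination node one would run:
#   netstat -ltn | check_bind 127.1.245.129 16083
# Simulated with a canned listener line matching the failing address:
printf 'tcp 0 0 127.1.245.129:16083 0.0.0.0:* LISTEN\n' | check_bind 127.1.245.129 16083
```

If the address is free yet bind still fails with `Permission denied`, the remaining suspects on a node of this era would be SELinux policy or the gear's loopback alias not being set up on the destination, which fits a move-time start racing ahead of gear provisioning.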
The cases described in this bug are officially "not supported"; as such, we're going to close the bug. (Bug triage meeting.)