Bug 927181 - Cannot move an application with an alias across districts
Summary: Cannot move an application with an alias across districts
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OKD
Classification: Red Hat
Component: Pod
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Rajat Chopra
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-03-25 09:43 UTC by Rony Gong 🔥
Modified: 2015-05-15 02:17 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-04-16 16:09:49 UTC
Target Upstream Version:
Embargoed:


Attachments
development.log (28.63 KB, text/plain)
2013-03-25 09:43 UTC, Rony Gong 🔥
mcollective.log (12.16 KB, text/plain)
2013-03-28 07:46 UTC, Rony Gong 🔥

Description Rony Gong 🔥 2013-03-25 09:43:54 UTC
Created attachment 715942 [details]
development.log

Description of problem:
Cannot move an application with an alias across districts. The move fails with the following errors:
2013-03-25 05:02:06.876 [DEBUG] Dalli::Server#connect localhost:11212 (pid:30809)
2013-03-25 05:02:06.885 [INFO ] localhost:11212 failed (count: 3) (pid:30809)
2013-03-25 05:02:06.886 [DEBUG] localhost:11212 is still down (for 59.563 seconds now) (pid:30809)
2013-03-25 05:02:06.895 [DEBUG] down_retry_delay not reached for localhost:11212 (0.991 seconds left) (pid:30809)
2013-03-25 05:02:06.901 [DEBUG] down_retry_delay not reached for localhost:11212 (0.984 seconds left) (pid:30809)
2013-03-25 05:02:06.906 [DEBUG] DEBUG: Performing cartridge level move for 'ruby-1.8' on ip-10-147-199-177 (pid:30809)
Please see the attachment for the detailed log.

However, moving an application with an alias within the same district succeeds.

Version-Release number of selected component (if applicable):
devenv_2993

How reproducible:
Always

Steps to Reproduce:
1. Set up a multi-node environment with two districts and add nodes to each.
2. Create an application and add an alias to it.
3. Move the application across districts (a command sketch follows below).
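
For reference, a minimal command sketch of the steps above, assuming OpenShift Origin 2.x admin tooling and the 2013-era rhc client; the district names, hostnames, cartridge, and alias below are placeholders:

# Step 1: create two districts and add a node to each
oo-admin-ctl-district -c create -n district1 -p small
oo-admin-ctl-district -c add-node -n district1 -i node1.example.com
oo-admin-ctl-district -c create -n district2 -p small
oo-admin-ctl-district -c add-node -n district2 -i node2.example.com

# Step 2: create an application and add an alias to it
rhc app create redmine ruby-1.8
rhc alias add redmine www.example.com

# Step 3: move the gear to a node in the other district
oo-admin-move --gear_uuid <gear_uuid> -i node2.example.com --allow_change_district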
  
Actual results:
Move failed:
[root@ip-10-165-5-176 .last_access]# oo-admin-move --gear_uuid b14b549494f611e28b4522000aa505b0 -i ip-10-147-199-177 --allow_change_district
URL: http://redmine-qgong1.dev.rhcloud.com
Login: qgong
App UUID: 514fbb5b737a7d15750001e7
Gear UUID: 514fbb5b737a7d15750001e7
DEBUG: Source district uuid: 8972072294ee11e2918a22000aa505b0
DEBUG: Destination district uuid: 51500bbc737a7d0c7c000001
DEBUG: Getting existing app 'redmine' status before moving
DEBUG: Gear component 'ruby-1.8' was running
DEBUG: Stopping existing app cartridge 'mysql-5.1' before moving
DEBUG: Performing cartridge level pre-move for embedded mysql-5.1 for 'redmine' on ip-10-152-179-118
DEBUG: Stopping existing app cartridge 'ruby-1.8' before moving
DEBUG: Force stopping existing app cartridge 'ruby-1.8' before moving
DEBUG: Reserved uid '1004' on district: '51500bbc737a7d0c7c000001'
DEBUG: Creating new account for gear 'redmine' on ip-10-147-199-177
DEBUG: Moving content for app 'redmine', gear 'redmine' to ip-10-147-199-177
Identity added: /var/www/openshift/broker/config/keys/rsync_id_rsa (/var/www/openshift/broker/config/keys/rsync_id_rsa)
Agent pid 31449
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 31449 killed;
DEBUG: Performing cartridge level move for 'ruby-1.8' on ip-10-147-199-177
DEBUG: Performing cartridge level move for embedded mysql-5.1 for 'redmine' on ip-10-147-199-177
DEBUG: Performing cartridge level post-move for embedded mysql-5.1 for 'redmine' on ip-10-147-199-177
DEBUG: Starting cartridge 'ruby-1.8' in 'redmine' after move on ip-10-147-199-177
DEBUG: Starting cartridge 'mysql-5.1' in 'redmine' after move on ip-10-147-199-177
DEBUG: Fixing DNS and mongo for gear 'redmine' after move
DEBUG: Changing server identity of 'redmine' from 'ip-10-152-179-118' to 'ip-10-147-199-177'
DEBUG: Moving failed.  Rolling back gear 'redmine' 'redmine' with remove-httpd-proxy on 'ip-10-147-199-177'
DEBUG: Moving failed.  Rolling back gear 'redmine' in 'redmine' with destroy on 'ip-10-147-199-177'
DEBUG: Performing cartridge level post-move for embedded mysql-5.1 for 'redmine' on ip-10-152-179-118
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:2522:in `parse_result': Node execution failure (error getting result from node).  If the problem persists please contact Red Hat support. (OpenShift::NodeException)
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:1229:in `add_alias'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:1612:in `block in move_gear_post'
	from /opt/rh/ruby193/root/usr/share/gems/gems/mongoid-3.0.21/lib/mongoid/relations/proxy.rb:143:in `each'
	from /opt/rh/ruby193/root/usr/share/gems/gems/mongoid-3.0.21/lib/mongoid/relations/proxy.rb:143:in `method_missing'
	from /opt/rh/ruby193/root/usr/share/gems/gems/mongoid-3.0.21/lib/mongoid/relations/embedded/many.rb:396:in `method_missing'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:1611:in `move_gear_post'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:1788:in `move_gear'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:1699:in `block in move_gear_secure'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.6.5/app/models/application.rb:1091:in `run_in_application_lock'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.6.4/lib/openshift/mcollective_application_container_proxy.rb:1698:in `move_gear_secure'
	from /usr/sbin/oo-admin-move:110:in `<main>'


Expected results:
The move succeeds and the alias still works after the move.

Additional info:

Comment 1 Rajat Chopra 2013-03-26 00:35:35 UTC
Lowering the severity as we do not move across districts in production.

Meanwhile, is an mcollective log available from the target node at the time this error happens?

Comment 2 Rony Gong 🔥 2013-03-28 07:46:41 UTC
Created attachment 717469 [details]
mcollective.log

Comment 3 Rony Gong 🔥 2013-03-28 07:51:04 UTC
@Rajat, here is part of mcollective.log; see the attachment above for the full log.
D, [2013-03-28T03:40:39.751426 #3756] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
D, [2013-03-28T03:40:39.751704 #3756] DEBUG -- : runnerstats.rb:38:in `validated' Incrementing validated stat
W, [2013-03-28T03:40:39.751984 #3756]  WARN -- : runner.rb:71:in `rescue in block in run' Failed to handle message: undefined class/module Alias - ArgumentError

W, [2013-03-28T03:40:39.752116 #3756]  WARN -- : runner.rb:72:in `rescue in block in run' /opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/security/psk.rb:27:in `load'
	/opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/security/psk.rb:27:in `decodemsg'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/message.rb:182:in `decode!'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/runner.rb:119:in `receive'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/runner.rb:52:in `block in run'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/runner.rb:50:in `loop'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/runner.rb:50:in `run'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/unix_daemon.rb:30:in `block in daemonize_runner'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/unix_daemon.rb:13:in `block in daemonize'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/unix_daemon.rb:5:in `fork'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/unix_daemon.rb:5:in `daemonize'
	/opt/rh/ruby193/root/usr/share/ruby/mcollective/unix_daemon.rb:20:in `daemonize_runner'
	/usr/sbin/mcollectived:43:in `<main>'
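
The "undefined class/module Alias - ArgumentError" above is the classic failure mode of Ruby's Marshal deserialization, which the PSK security plugin's decodemsg appears to use here (the `load` frame at psk.rb:27): the broker marshals an object whose class (here, Alias) is not loaded in the mcollectived process on the target node, so the node cannot reconstruct it. A minimal Ruby sketch of that failure mode, simulating both sides in one process; the empty Alias class below is a stand-in, not the actual broker model:

# Broker side: the class is loaded, so dumping the object works.
class Alias; end
payload = Marshal.dump(Alias.new)

# Node side: mcollectived never loaded the class; simulate by removing it.
Object.send(:remove_const, :Alias)

begin
  Marshal.load(payload)
rescue ArgumentError => e
  puts e.message  # => "undefined class/module Alias"
end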

Comment 4 Dan McPherson 2013-04-16 16:09:49 UTC
We no longer support this scenario.

