Bug 963650 - Fail to restore a scaling JBossEWS (either 1.0 or 2.0) with MongoDB embedded after scaling it up
Status: CLOSED CURRENTRELEASE
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Dan Mace
QA Contact: libra bugs
Reported: 2013-05-16 06:24 EDT by Zhe Wang
Modified: 2015-05-14 19:18 EDT
Fixed In Version: devenv_3263+
Doc Type: Bug Fix
Last Closed: 2013-06-11 00:04:27 EDT
Type: Bug


Attachments: None
Description Zhe Wang 2013-05-16 06:24:54 EDT
Description of problem:
Given a scaling JBossEWS application (either 1.0 or 2.0) with MongoDB embedded, restoring the application from its snapshot tarball fails after scaling it up.

Version-Release number of selected component (if applicable):
devenv_3231

How reproducible:
always

Steps to Reproduce:
1. create a scaling JBossEWS application (either 1.0 or 2.0) with MongoDB embedded
rhc app create sews jbossews-1.0 mongodb-2.2 -s

2. disable its auto-scaling with a marker file
touch <app_repo>/.openshift/markers/disable_auto_scaling

3. commit the marker and push it to the app's remote repo

4. scale the app up via the REST API
curl -k -H "Accept:application/json" --user zhewang+1@redhat.com:redhat https://ec2-184-73-146-97.compute-1.amazonaws.com/broker/rest/domains/dev3231tst/applications/sews/events -d event=scale-up -XPOST | python -m json.tool

5. log into this app and write some data to its MongoDB

6. save a tarball of this app
rhc snapshot save sews

7. restore this app from the saved tarball
rhc snapshot restore sews -f sews.tar.gz
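The scale-up call in step 4 can also be issued from a script rather than curl. Below is a minimal Python stdlib sketch that only builds the request (it does not send it); the broker host, domain, and credentials are placeholders, not the values from this report:

```python
import base64
import urllib.parse
import urllib.request

def build_scale_up_request(broker, domain, app, user, password):
    """Build (but do not send) the scale-up event request from step 4."""
    url = f"https://{broker}/broker/rest/domains/{domain}/applications/{app}/events"
    # Form-encoded body, equivalent to curl's `-d event=scale-up`
    data = urllib.parse.urlencode({"event": "scale-up"}).encode()
    req = urllib.request.Request(url, data=data, method="POST")
    req.add_header("Accept", "application/json")
    # Basic auth header, equivalent to curl's `--user user:password`
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Placeholder values for illustration only
req = build_scale_up_request(
    "broker.example.com", "mydomain", "sews", "user@example.com", "secret")
print(req.full_url)
```

Sending it amounts to `urllib.request.urlopen(req)` against a live broker, which returns the same JSON the curl invocation pipes into `python -m json.tool`.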
  
Actual results:
In step 7, it reports the following error:

Thu May 16 05:47:22 	Creating index: { key: { _id: 1 }, ns: "sews.system.users", name: "_id_" }
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/v2_cart_model.rb:976:in `block in do_control_with_directory': Failed to execute: 'control deploy' for /var/lib/openshift/5194a7f58239ec154e000007/haproxy (OpenShift::Utils::ShellExecutionException)
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/v2_cart_model.rb:788:in `process_cartridges'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/v2_cart_model.rb:949:in `do_control_with_directory'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/v2_cart_model.rb:811:in `do_control'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/application_container.rb:522:in `deploy'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/default_builder.rb:46:in `post_receive'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/application_container.rb:417:in `post_receive'
	from /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.9.1/lib/openshift-origin-node/model/application_container.rb:792:in `restore'
	from /usr/bin/gear:277:in `block (2 levels) in <main>'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/command.rb:180:in `call'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/command.rb:180:in `call'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/command.rb:155:in `run'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/runner.rb:385:in `run_active_command'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/runner.rb:62:in `run!'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/delegates.rb:11:in `run!'
	from /opt/rh/ruby193/root/usr/share/gems/gems/commander-4.0.3/lib/commander/import.rb:10:in `block in <top (required)>'
Error in trying to restore snapshot. You can try to restore manually by running:
cat sews.tar.gz | ssh 5194a7f58239ec154e000007@sews-dev3231tst.dev.rhcloud.com 'restore INCLUDE_GIT'


Expected results:
Restoring the MongoDB data of a scaled-up JBossEWS application should be successful.

Additional info:
Comment 1 Dan Mace 2013-05-23 18:33:46 EDT
Seems to be working now in devenv_3263; I suspect it was related to the general scaling sync bugs which were recently resolved. Please re-test.
Comment 2 Zhe Wang 2013-05-24 03:52:10 EDT
Verified in devenv_3268, following the same steps as in the Description.

Result:
connected to: 127.0.251.129:27017
Fri May 24 03:50:18 dump/admin/system.users.bson
Fri May 24 03:50:18 	going into namespace [admin.system.users]
1 objects found
Fri May 24 03:50:18 	Creating index: { key: { _id: 1 }, ns: "admin.system.users", name: "_id_" }
Fri May 24 03:50:18 dump/ews/test.bson
Fri May 24 03:50:18 	going into namespace [ews.test]
Fri May 24 03:50:18 	 dropping
1 objects found
Fri May 24 03:50:18 	Creating index: { key: { _id: 1 }, ns: "ews.test", name: "_id_" }
Fri May 24 03:50:18 dump/ews/openshift.bson
Fri May 24 03:50:18 	going into namespace [ews.openshift]
Fri May 24 03:50:18 	 dropping
1 objects found
Fri May 24 03:50:18 	Creating index: { key: { _id: 1 }, ns: "ews.openshift", name: "_id_" }
Fri May 24 03:50:18 dump/ews/system.users.bson
Fri May 24 03:50:18 	going into namespace [ews.system.users]
1 objects found
Fri May 24 03:50:18 	Creating index: { key: { _id: 1 }, ns: "ews.system.users", name: "_id_" }
+ tmp=/var/lib/openshift/3bece25ac44211e2abaa22000a98b80b/jbossews//tmp
+ '[' -d /var/lib/openshift/3bece25ac44211e2abaa22000a98b80b/jbossews//tmp ']'
+ for d in '$tmp/*'
+ '[' -d '/var/lib/openshift/3bece25ac44211e2abaa22000a98b80b/jbossews//tmp/*' ']'

RESULT:
Success
