Description of problem:
Building an existing app via Jenkins with the hot_deploy marker still restarts the app after a server upgrade.

Version-Release number of selected component (if applicable):
STG (devenv_stage_582)

How reproducible:
Always

Steps to Reproduce:
1. Create some apps with the jenkins client embedded.
2. Upgrade and migrate the server.
3. Add the hot_deploy marker and deploy.

Actual results:
The app restarts during the deploy (pid changed).

Expected results:
The app should not restart when the hot_deploy marker is present.

Additional info:
Newly created apps do not restart with the hot_deploy marker.

Build command of apps before the server upgrade:

alias rsync="rsync --delete-after -azO -e '$GIT_SSH'"
upstream_ssh="52734bbc2587c870b4000050@jbossews20-${OPENSHIFT_NAMESPACE}.stg.rhcloud.com"

# Sync any libraries
rsync $upstream_ssh:~/.m2/ ~/.m2/

# Build/update libs and run user pre_build and build
gear build

# Run tests here

# Deploy new build

# Stop app
$GIT_SSH $upstream_ssh 'gear stop --conditional'

# Push content back to application
rsync ~/.m2/ $upstream_ssh:~/.m2/
rsync $WORKSPACE/webapps/. $upstream_ssh:'${OPENSHIFT_REPO_DIR}webapps/'
rsync $WORKSPACE/.openshift/ $upstream_ssh:'${OPENSHIFT_REPO_DIR}.openshift/'

# Configure / start app
$GIT_SSH $upstream_ssh 'gear remotedeploy'

Build command of apps after the server upgrade:

source $OPENSHIFT_CARTRIDGE_SDK_BASH
alias rsync="rsync --delete-after -az -e '$GIT_SSH'"
upstream_ssh="528f4af9dbd93c69a700048d@ews2-${OPENSHIFT_NAMESPACE}.stg.rhcloud.com"

# remove previous metadata, if any
rm -f $OPENSHIFT_HOMEDIR/app-deployments/current/metadata.json

if ! marker_present "force_clean_build"; then
  # don't fail if these rsyncs fail
  set +e
  rsync $upstream_ssh:'$OPENSHIFT_BUILD_DEPENDENCIES_DIR' $OPENSHIFT_BUILD_DEPENDENCIES_DIR
  rsync $upstream_ssh:'$OPENSHIFT_DEPENDENCIES_DIR' $OPENSHIFT_DEPENDENCIES_DIR
  set -e
fi

# Build/update libs and run user pre_build and build
gear build

# Run tests here

# Deploy new build

# Stop app
$GIT_SSH $upstream_ssh "gear stop --conditional --exclude-web-proxy --git-ref $GIT_COMMIT"

deployment_dir=`$GIT_SSH $upstream_ssh 'gear create-deployment-dir'`

# Push content back to application
rsync $OPENSHIFT_HOMEDIR/app-deployments/current/metadata.json $upstream_ssh:app-deployments/$deployment_dir/metadata.json
rsync --exclude .git $WORKSPACE/ $upstream_ssh:app-root/runtime/repo/
rsync $OPENSHIFT_BUILD_DEPENDENCIES_DIR $upstream_ssh:app-root/runtime/build-dependencies/
rsync $OPENSHIFT_DEPENDENCIES_DIR $upstream_ssh:app-root/runtime/dependencies/

# Configure / start app
$GIT_SSH $upstream_ssh "gear remotedeploy --deployment-datetime $deployment_dir"
You need to recreate your Jenkins job after upgrading to the new deployments release if you want full functionality (e.g. hot deploy): https://www.openshift.com/blogs/online-release-for-november-2013
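Recreating the job amounts to removing and re-adding the jenkins-client cartridge. A hedged, dry-run sketch of the idea follows ("myapp" is a hypothetical app name; the rhc commands are echoed rather than executed, so verify the exact invocations against your rhc client version):

```shell
# Hypothetical app name; assumes the rhc client tools are installed.
APP=myapp

# 1. Delete the app's project in the Jenkins server UI, then drop the old cartridge:
echo "rhc cartridge remove jenkins-client -a $APP --confirm"

# 2. Re-embed the client so the job is regenerated with the new
#    deployment-aware build script:
echo "rhc cartridge add jenkins-client -a $APP"
```

The re-add step is what regenerates the Jenkins project's shell build step from the current template.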
We tried this on STG (devenv-stage_593) with the following steps:
1. Make a change and git push an existing app with the hot_deploy marker.
2. Check the build commands this app uses in the jenkins server.
3. Delete the project of this app in the jenkins server.
4. Remove jenkins-client from the app.
5. Re-add jenkins-client to the app.
6. Check the build commands this app uses in the jenkins server.
7. Create a new app with the jenkins client embedded.
8. Check the build commands of the newly created app in the jenkins server.
9. Make a change and git push with the hot_deploy marker.

Steps 2 and 6 show the same result, which differs from step 8. The app restarted during step 1 (pid changed); it did not restart during step 9.

Steps 2 and 6 show:

alias rsync="rsync --delete-after -azO -e '$GIT_SSH'"
upstream_ssh="527319be2587c89a0000020f@cjbossas7sspsqlcronmin2gearsjkns-${OPENSHIFT_NAMESPACE}.stg.rhcloud.com"

# Sync any libraries
rsync $upstream_ssh:~/.m2/ ~/.m2/

# Build/update libs and run user pre_build and build
gear build

# Run tests here

# Deploy new build

# Stop app
$GIT_SSH $upstream_ssh 'gear stop --conditional'

# Push content back to application
rsync ~/.m2/ $upstream_ssh:~/.m2/
rsync $WORKSPACE/deployments/ $upstream_ssh:'${OPENSHIFT_REPO_DIR}deployments/'
rsync $WORKSPACE/.openshift/ $upstream_ssh:'${OPENSHIFT_REPO_DIR}.openshift/'

# Configure / start app
$GIT_SSH $upstream_ssh 'gear remotedeploy'

Step 8 shows:

source $OPENSHIFT_CARTRIDGE_SDK_BASH
alias rsync="rsync --delete-after -az -e '$GIT_SSH'"
upstream_ssh="52932510dbd93c28ca0000e7@as7-${OPENSHIFT_NAMESPACE}.stg.rhcloud.com"

# remove previous metadata, if any
rm -f $OPENSHIFT_HOMEDIR/app-deployments/current/metadata.json

if ! marker_present "force_clean_build"; then
  # don't fail if these rsyncs fail
  set +e
  rsync $upstream_ssh:'$OPENSHIFT_BUILD_DEPENDENCIES_DIR' $OPENSHIFT_BUILD_DEPENDENCIES_DIR
  rsync $upstream_ssh:'$OPENSHIFT_DEPENDENCIES_DIR' $OPENSHIFT_DEPENDENCIES_DIR
  set -e
fi

# Build/update libs and run user pre_build and build
gear build

# Run tests here

# Deploy new build

# Stop app
$GIT_SSH $upstream_ssh "gear stop --conditional --exclude-web-proxy --git-ref $GIT_COMMIT"

deployment_dir=`$GIT_SSH $upstream_ssh 'gear create-deployment-dir'`

# Push content back to application
rsync $OPENSHIFT_HOMEDIR/app-deployments/current/metadata.json $upstream_ssh:app-deployments/$deployment_dir/metadata.json
rsync --exclude .git $WORKSPACE/ $upstream_ssh:app-root/runtime/repo/
rsync $OPENSHIFT_BUILD_DEPENDENCIES_DIR $upstream_ssh:app-root/runtime/build-dependencies/
rsync $OPENSHIFT_DEPENDENCIES_DIR $upstream_ssh:app-root/runtime/dependencies/

# Configure / start app
$GIT_SSH $upstream_ssh "gear remotedeploy --deployment-datetime $deployment_dir"
This is occurring because the primary cartridge's jenkins_shell_command file still exists on disk as the old version, and it takes precedence over the new stock script: https://github.com/openshift/origin-server/blob/956fddd12a429c0109ec004e819c5cc8dda81f92/cartridges/openshift-origin-cartridge-jenkins-client/bin/install#L20-L22
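The precedence behavior can be illustrated with a small sketch. The file name is real, but the directory layout and script contents below are simplified, hypothetical stand-ins for a gear's cartridge directory:

```shell
# Simulate a primary cartridge directory left over from before the upgrade.
CART_DIR=$(mktemp -d)
mkdir -p "$CART_DIR/metadata"

# The pre-upgrade build script survives the server upgrade on disk:
echo "old pre-upgrade build script" > "$CART_DIR/metadata/jenkins_shell_command"

# The jenkins-client install step only falls back to its stock template when
# the primary cartridge does NOT already ship the file, so the stale copy wins:
if [ -f "$CART_DIR/metadata/jenkins_shell_command" ]; then
  JENKINS_SHELL_COMMAND=$(cat "$CART_DIR/metadata/jenkins_shell_command")
else
  JENKINS_SHELL_COMMAND="stock deployment-aware build script"
fi

echo "$JENKINS_SHELL_COMMAND"
```

Since the stale file exists, the job keeps the old build commands, which lack the new hot-deploy-aware `gear remotedeploy --deployment-datetime` flow.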
*** Bug 1038165 has been marked as a duplicate of this bug. ***
Is something like this what you have in mind? https://github.com/openshift/origin-server/pull/4424
Commits pushed to master at https://github.com/openshift/li

https://github.com/openshift/li/commit/acb1b1c093e492ba88b87c7fba09366261e6e084
Bug 1033581 - Zend's jenkins_shell_command.erb was effectively the same as the stock jenkins-client script

https://github.com/openshift/li/commit/954fdad2bd1a632bda29d5e69860bf3f6a6b489d
Bug 1033581 - minor update and adding comment
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/46030ceadd18a9f309cdf6aa0182584c900b8cf2
Bug 1033581 - Adding upgrade logic to remove the unneeded jenkins_shell_command files

This is the squashed version of https://github.com/openshift/origin-server/pull/4424
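The essence of the cleanup can be sketched as follows. This is a hedged illustration of the idea, not the actual upgrade code (the directory layout is a hypothetical stand-in; only the jenkins_shell_command file name comes from the bug):

```shell
# Simulate a gear containing a stale per-cartridge jenkins_shell_command.
GEAR_DIR=$(mktemp -d)
mkdir -p "$GEAR_DIR/jbossews/metadata"
echo "old build script" > "$GEAR_DIR/jbossews/metadata/jenkins_shell_command"

# Upgrade step: delete the now-unneeded file wherever it appears, so the
# stock jenkins-client script takes effect on the next job regeneration.
find "$GEAR_DIR" -name jenkins_shell_command -type f -delete

remaining=$(find "$GEAR_DIR" -name jenkins_shell_command | wc -l)
echo "$remaining"
```

With the stale files gone, the precedence check in the jenkins-client install script falls through to the new stock template.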
Can this bug be moved to ON_QA?
Looks like Brenton moved it :-)
Verified on an instance migrated from devenv-stage_689 to devenv_4375.

Steps:
1. Launch an instance.
2. Change the script in /usr/libexec/openshift/cartridges/jenkins-client/metadata/jenkins_shell_command and restart the mcollective service.
3. Create a jenkins server and apps with the jenkins client embedded.
4. Check the script used by the apps.
5. Perform the server migration.
6. Check the script used by the apps.
7. Remove the jenkins client from the app and delete the corresponding project in the jenkins server.
8. Re-add jenkins-client to the app.
9. Check the script used by the apps.

Step 9 uses the newer version of the script, so moving the bug to fixed/verified.