Bug 1402125

Summary: podifying cfme: Ensure DB v2_key is always restored on upgrade/redeploy
Product: Red Hat CloudForms Management Engine Reporter: Dafna Ron <dron>
Component: cfme-openshift-app    Assignee: Satoe Imaishi <simaishi>
Status: CLOSED CURRENTRELEASE QA Contact: Einat Pacifici <epacific>
Severity: high Docs Contact: Red Hat CloudForms Documentation <cloudforms-docs>
Priority: unspecified    
Version: 5.7.0    CC: bazulay, fbladilo, jhardy, jkrocil, simaishi
Target Milestone: GA    Keywords: TestBlocker, TestOnly
Target Release: 5.7.0   
Hardware: x86_64   
OS: Linux   
Whiteboard: container
Fixed In Version: 5.7.0.14 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-01-11 20:12:33 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: Container Management Target Upstream Version:
Bug Depends On:    
Bug Blocks: 1399785    

Description Dafna Ron 2016-12-06 20:38:40 UTC
@simaishi During upgrades or redeployments we need to ensure that the check_deployment_status function restores the DB v2_key, along with database.yml, from the persistent volumes.

In the current implementation, when check_deployment_status handles an upgrade/redeployment case, it relies on the DB v2_key that ships with the upstream image instead of ensuring it uses the v2_key from the PV. This logic breaks redeployments/upgrades on builds that create a legitimate/unique v2_key during the first deployment, such as downstream builds.
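
For illustration, a minimal sketch of the intended restore step; the mount point /persistent and the app root /var/www/miq/vmdb are assumptions for this example, not taken from the actual scripts:

    # Hypothetical sketch of the upgrade/redeployment branch of check_deployment_status.
    # PV_DATA and APP_ROOT are assumed paths; the real manageiq-pods scripts may differ.
    PV_DATA=/persistent
    APP_ROOT=/var/www/miq/vmdb

    restore_pv_config() {
      # Restore the database config saved on the persistent volume
      cp -f "${PV_DATA}/config/database.yml" "${APP_ROOT}/config/database.yml"
      # Also restore the v2_key saved on the PV so previously encrypted DB values
      # stay readable, instead of trusting whatever key was baked into the image
      cp -f "${PV_DATA}/certs/v2_key" "${APP_ROOT}/certs/v2_key"
    }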

The root issue is that we supply a dev v2_key during image building for upstream; this bug has been silently hidden by the docker layer that runs the bin/setup script.

As a separate PR, we need to address bin/setup in our container builds: it is a script intended for development purposes, and we should not ship images with the v2_dev_key installed.
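
One possible shape of that follow-up is a cleanup step at the end of the upstream image build that drops the development key bin/setup installs, so shipped images never contain v2_dev_key; the path below is an assumption for illustration only:

    # Hypothetical post-build cleanup; path is assumed, not taken from the actual build files.
    rm -f /var/www/miq/vmdb/certs/v2_key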



taken from PR: https://github.com/ManageIQ/manageiq-pods/pull/74