
Bug 1784447

Summary: Migration with Azure object storage fails because operator and controller versions out of sync
Product: OpenShift Container Platform
Component: Migration Tooling
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: high
Target Milestone: ---
Target Release: 4.2.z
Reporter: Sergio <sregidor>
Assignee: John Matthews <jmatthew>
QA Contact: Sergio <sregidor>
Docs Contact:
CC: apinnick, chezhang, ernelson, rpattath, xjiang
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1784451 (view as bug list)
Environment:
Last Closed: 2019-12-19 15:44:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1784451
Bug Blocks:

Description Sergio 2019-12-17 13:32:47 UTC
Description of problem:
When Azure object storage is defined in an Azure-to-Azure migration, the credentials secret is not correctly identified and Velero fails with an error.

Version-Release number of selected component (if applicable):

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-12-15-230238   True        False         3h4m    Cluster version is 4.2.0-0.nightly-2019-12-15-230238

Operator: 1.0.1 OSBS images
image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-rhel7-operator@sha256:f41484cbe7dbc4e4522fbcd63adc0dc926d463e3516c82ec4360a91441f84fd4

Controller: 1.0.1 OSBS images
image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-controller-rhel8@sha256:e2c3cbb61157605d8246496f77c76b9b2950eb951bd0a63d4f8e3ae6f1884c2c

Velero:
image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-plugin-rhel8@sha256:9107a197ab0a1a5a13e47c0c9cd4582de81745f916b6a28aa78cb09428e00afa
image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-velero-rhel8@sha256:c59e1b3f35376fbf71352dba72710a9a4395c172168adfe3d57eb8dbf23194bb
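
A hedged way to double-check which builds are actually deployed (and whether the operator and controller are in sync) is to list the image of each deployment. The openshift-migration namespace is an assumption, not taken from this report; adjust it to wherever CAM is installed.

# List each deployment with the image(s) it runs; namespace is assumed.
$ oc get deployments -n openshift-migration \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'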

How reproducible:
Always

Steps to Reproduce:
1. Create two Azure clusters.
2. Install the Cluster Application Migration (CAM) tool in order to migrate from the source cluster to the target cluster.
3. Configure Azure object storage for the migration (a hedged secret sketch follows this list).
4. Migrate any application.
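
For step 3, a minimal sketch of the credentials that Velero's Azure plugin expects, using the four key names from the error below. The secret name cloud-credentials and the openshift-migration namespace are assumptions, not taken from this report; adjust them to match the actual CAM install.

# Assumed secret name and namespace; the four keys come from the error message.
$ cat <<EOF > credentials-velero
AZURE_SUBSCRIPTION_ID=<subscription-id>
AZURE_TENANT_ID=<tenant-id>
AZURE_CLIENT_ID=<client-id>
AZURE_CLIENT_SECRET=<client-secret>
EOF
$ oc create secret generic cloud-credentials -n openshift-migration \
    --from-file=cloud=credentials-velero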

Actual results:
Velero reports the following error and the migration cannot be executed.

time="2019-12-16T17:19:03Z" level=error msg="Error getting backup store for this location" backupLocation=azure-qlk7n controller=backup-sync error="rpc error: code = Unknown desc = unable to get all required environment variables: the following keys do not have values: AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_SUBSCRIPTION_ID" error.file="/go/src/github.com/heptio/velero/pkg/cloudprovider/azure/object_store.go:146" error.function=github.com/heptio/velero/pkg/cloudprovider/azure.getStorageAccountKey logSource="pkg/controller/backup_sync_controller.go:168"

Expected results:
The Azure object storage should be handled properly and migrations should complete without failures.

Additional info:
This problem happens because the controller and operator are out of sync. The related pull requests are:

https://github.com/fusor/mig-operator/pull/166 and https://github.com/fusor/mig-controller/pull/373

Both need to be merged into the 1.0.1 release.

Comment 1 Erik Nelson 2019-12-17 14:33:26 UTC
Moved back to MODIFIED so this can be added to the errata.

Comment 3 Sergio 2019-12-17 15:53:54 UTC
Verified with 1.0.1 OSBS images.

Snapshot migrations can be executed without problems in Azure.

Comment 5 errata-xmlrpc 2019-12-19 15:44:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:4304