Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1777481

Summary: Only the "Velero_Backups" resource group is allowed when executing an Azure snapshot migration
Product: OpenShift Container Platform
Reporter: Sergio <sregidor>
Component: Migration Tooling
Assignee: Danil Grigorev <dgrigore>
Status: CLOSED ERRATA
QA Contact: Sergio <sregidor>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.2.0
CC: chezhang, dgrigore, ernelson, jmatthew, rpattath, xjiang
Target Milestone: ---
Target Release: 4.2.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Clones: 1777658 (view as bug list)
Environment:
Last Closed: 2019-12-19 15:44:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1777658
Bug Blocks:

Description Sergio 2019-11-27 16:07:02 UTC
Description of problem:
When we try to execute a migration using Azure snapshots, it only works if the storage account is created in the "Velero_Backups" resource group. If the storage account belongs to any other resource group, the migration fails.

Version-Release number of selected component (if applicable):
Target cluster on Azure:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-11-25-200935   True        False         29h     Cluster version is 4.2.0-0.nightly-2019-11-25-200935

Source cluster on Azure:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-11-24-111327   True        False         2d12h   Cluster version is 4.2.0-0.nightly-2019-11-24-111327


UI image 1.0.1 stored in rh-osbs

  - image: image-registry.openshift-image-registry.svc:5000/rhcam-1-0/openshift-migration-ui-rhel8@sha256:5e04d383b3796982aaf366c7bb6742661bc4b4964a326ee8db20c533f1831342

How reproducible:
Always

Steps to Reproduce:
1. Create a storage account in any resource group other than "Velero_Backups"

   following these instructions: https://github.com/fusor/mig-operator/blob/45edcc3a508dbca3e6f5aca75d17c2ff744e6177/docs/usage/ObjectStorage.md
2. Configure the storage account so it can be used in a snapshot migration from Azure to Azure
3. Execute a snapshot migration
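As an illustration of step 1, a storage account in a non-default resource group can be created with the Azure CLI. The resource group and account names below are placeholders chosen for this sketch, not values from the report; the real procedure is the mig-operator document linked above.

```shell
# Create a resource group with a name other than "Velero_Backups"
# (group name, account name, and location are illustrative placeholders).
az group create --name migration-snapshots --location eastus

# Create the storage account for Velero backups inside that group.
az storage account create \
    --name velerobackupsdemo \
    --resource-group migration-snapshots \
    --sku Standard_GRS \
    --kind BlobStorage \
    --access-tier Hot
```

With the bug present, snapshots fail unless the group is literally named "Velero_Backups", regardless of which resource group the storage account actually belongs to.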

Actual results:
The following error is shown in the velero logs, and the migration fails:

time="2019-11-27T12:28:05Z" level=info msg="1 errors encountered backup up item" backup=openshift-migration/4dc00ed0-1111-11ea-b826-69de3f3816c4-5kg68 group=v1 logSource="pkg/backup/resource_backupper.go:260" name=nginx-logs namespace=nginx-pv resource=persistentvolumeclaims
time="2019-11-27T12:28:05Z" level=error msg="Error backing up item" backup=openshift-migration/4dc00ed0-1111-11ea-b826-69de3f3816c4-5kg68 error="error taking snapshot of volume: rpc error: code = Unknown desc = compute.SnapshotsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=404 Code=\"ResourceGroupNotFound\" Message=\"Resource group 'Velero_Backups' could not be found.\"" group=v1 logSource="pkg/backup/resource_backupper.go:264" name=nginx-logs namespace=nginx-pv resource=persistentvolumeclaims

Expected results:
The migration should complete successfully.

Additional info:

Comment 2 Erik Nelson 2019-12-10 19:07:03 UTC
https://github.com/fusor/mig-ui/pull/630

Comment 4 Sergio 2019-12-17 15:49:17 UTC
Verified in 1.0.1 osbs images.

Any resource group name is now valid for snapshots.

Comment 6 errata-xmlrpc 2019-12-19 15:44:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:4304