Bug 2238974 - Migration failing on Azure due to authorization issue
Summary: Migration failing on Azure due to authorization issue
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Migration Toolkit for Containers
Classification: Red Hat
Component: General
Version: 1.8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 1.8.0
Assignee: Dylan Murray
QA Contact: ssingla
URL:
Whiteboard:
Depends On:
Blocks: 2247120
 
Reported: 2023-09-14 15:35 UTC by ssingla
Modified: 2023-10-30 19:25 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2247120
Environment:
Last Closed: 2023-10-05 01:04:10 UTC
Target Upstream Version:
Embargoed:




Links:
  GitHub migtools/mig-controller pull 1353 (Merged): Bug 2238974 - set subscriptionId on BSL (#1352) - last updated 2023-09-22 17:16:26 UTC
  Red Hat Product Errata RHSA-2023:5447 - last updated 2023-10-05 01:04:28 UTC

Description ssingla 2023-09-14 15:35:38 UTC
Description of problem:
When performing a migration on an Azure cluster, backing up to Azure storage, the migration fails at the Backup stage.

Version-Release number of selected component (if applicable):
1.8.0

How reproducible:
Always

Steps to Reproduce:
1. Deploy MTC on the source and target Azure clusters.
2. Create Azure storage as described in the OADP docs (providing the "Contributor" role for the newly created Azure storage account), then connect the replication repository using the storage details.
3. Deploy a stateful app on the source cluster (a minimal example follows this list).
4. Run a Cutover migration.
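
A minimal sketch of a stateful test app for step 3 (the namespace, names, image, and sizes here are arbitrary placeholders, not taken from this report; the PVC relies on the cluster's default storage class):

# Create a throwaway namespace and a PVC-backed app that continuously writes data.
oc new-project stateful-test
cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: stateful-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: stateful-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: writer
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 30; done"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
EOF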

Actual results:
Migration status -> Cutover failed

Backup has this error:
  status:
    completionTimestamp: "2023-09-14T12:52:32Z"
    expiration: "2023-10-14T12:52:31Z"
    failureReason: |
      error checking if backup already exists in object storage: rpc error: code = Unknown desc = HEAD https://veleroa1f1f96d0b75.blob.core.windows.net/velero/velero/backups/migration-fb20f-initial-7crm9/velero-backup.json
      --------------------------------------------------------------------------------
      RESPONSE 403: 403 This request is not authorized to perform this operation using this permission.
      ERROR CODE: AuthorizationPermissionMismatch
      --------------------------------------------------------------------------------
      Response contained no body
      --------------------------------------------------------------------------------
    formatVersion: 1.1.0
    phase: Failed
    startTimestamp: "2023-09-14T12:52:31Z"
    version: 1
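
AuthorizationPermissionMismatch means the authenticated identity is not authorized for that blob operation. A hedged diagnostic sketch, reusing the variable names from "Additional info" below, to see which roles the service principal actually holds:

# List every role assignment held by the service principal across the subscription.
az role assignment list --assignee "$AZURE_CLIENT_ID" --all -o table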



Expected results:
Migration should not fail.

Additional info:

Used this doc to configure the Azure storage:
https://docs.openshift.com/container-platform/4.13/backup_and_restore/application_backup_and_restore/installing/installing-oadp-azure.html#migration-configuring-azure_installing-oadp-azure

Except for one step that is missing from the doc above (a separate bug is needed for this):

AZURE_ROLE="Contributor"  # provided this role, which is more privileged

AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name $AZURE_STORAGE_ACCOUNT_ID --role $AZURE_ROLE --query 'password' --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID -o tsv`
AZURE_CLIENT_ID=`az ad sp list --display-name $AZURE_STORAGE_ACCOUNT_ID --query '[0].appId' -o tsv`
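
For completeness, the referenced OADP doc also collects the subscription, tenant, and resource group values and writes everything into a credentials file for the Velero Azure plugin; roughly (the values below, including the resource group name, are placeholders, see the linked doc for the exact steps):

# Remaining variables the OADP Azure setup collects (placeholder values).
AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`
AZURE_RESOURCE_GROUP=Velero_Backups   # resource group that holds the storage account

# Credentials file consumed by the Velero Azure plugin.
cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF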

Comment 1 Dylan Murray 2023-09-19 18:25:49 UTC
First thing I'm noticing is that the provided Azure config doesn't contain a resource group variable. Is that intentional?
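
For reference, the linked mig-controller pull request ("set subscriptionId on BSL") points at the Velero BackupStorageLocation config; a hedged sketch of the Azure fields that typically end up there (placeholder values, not taken from this bug):

# Hedged sketch: with the Velero Azure plugin, the BackupStorageLocation's
# spec.config is expected to carry roughly these fields (placeholder values):
#
#   spec:
#     provider: azure
#     objectStorage:
#       bucket: velero
#     config:
#       resourceGroup: <resource group that holds the storage account>
#       storageAccount: <storage account name>
#       subscriptionId: <subscription ID>
#
# One way to inspect what was actually generated on a live cluster:
oc get backupstoragelocation -n openshift-migration -o yaml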

Comment 9 errata-xmlrpc 2023-10-05 01:04:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Migration Toolkit for Containers (MTC) 1.8.0 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:5447

