Bug 1879488 - Azure pvc snapshot migrations fail
Summary: Azure pvc snapshot migrations fail
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Migration Tooling
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Scott Seago
QA Contact: Xin jiang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-16 12:12 UTC by Sergio
Modified: 2020-09-24 08:15 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-24 08:15:30 UTC
Target Upstream Version:


Attachments (Terms of Use)
restore logs (6.79 KB, text/plain)
2020-09-16 12:12 UTC, Sergio

Description Sergio 2020-09-16 12:12:16 UTC
Created attachment 1715070 [details]
restore logs

Description of problem:
When a PVC is migrated using Copy -> Snapshot, the migration fails.

Version-Release number of selected component (if applicable):
CMT 1.3
SOURCE CLUSTER: OCP 4.2
TARGET CLUSTER:  OCP 4.5
REPLICATION REPOSITORY: AZURE

How reproducible:
Always

Steps to Reproduce:
1. Create a namespace with an application that uses PVCs

oc new-project bztest
oc new-app mysql-persistent

2. Create a migration plan, and choose the "Copy" action with the "Volume snapshot" copy method when configuring the PVC migration.

3. Migrate

Actual results:

The migration fails. There are no errors in the restore logs. This is the status of the PVC in the target cluster after the failure:

$ oc get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql   Lost     pvc-c2cc0043-f767-11ea-b3f5-0022488efde9   0                         managed-premium   4m31s


The restore log shows that the volume was skipped:

time="2020-09-15T15:30:17Z" level=info msg="Adding PV pvc-c2cc0043-f767-11ea-b3f5-0022488efde9 as an additional item to restore" cmd=/velero logSource="pkg/restore/add_pv_from_pvc_action.go:66" pluginName=velero restore=openshift-migration/f8ce4020-f767-11ea-afb5-03fb410600cc-7cz8b
time="2020-09-15T15:30:17Z" level=info msg="Skipping persistentvolumes/pvc-c2cc0043-f767-11ea-b3f5-0022488efde9 because it's already been restored." logSource="pkg/restore/restore.go:844" restore=openshift-migration/f8ce4020-f767-11ea-afb5-03fb410600cc-7cz8b


Expected results:

The migration should finish without errors, and the PVC in the target cluster should bind to the PV restored from the snapshot.



Additional info:

The volume can be found in the source cluster's resource group and in the replication repository's resource group, but not in the target cluster's resource group.

Comment 1 Xin jiang 2020-09-24 08:15:30 UTC
This is not a bug; the AZURE_RESOURCE_GROUP setting was missing from our configuration.
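For context, AZURE_RESOURCE_GROUP is one of the fields in the Azure credentials file consumed by Velero's Azure plugin. A minimal sketch of such a credentials file follows; the placeholder values are hypothetical, and the assumption that this file is where the field was omitted is mine, not stated in the report:

```shell
# Sketch of a credentials-velero file for the Velero Azure plugin.
# All values below are placeholders.
AZURE_SUBSCRIPTION_ID=<subscription-id>
AZURE_TENANT_ID=<tenant-id>
AZURE_CLIENT_ID=<client-id>
AZURE_CLIENT_SECRET=<client-secret>
# Resource group containing the cluster's managed disks. If this is
# omitted, snapshot-based PV restores cannot place the restored disk
# in the target cluster's resource group.
AZURE_RESOURCE_GROUP=<cluster-resource-group>
AZURE_CLOUD_NAME=AzurePublicCloud
```

This matches the symptom in the report: the restored volume appeared in the source and replication-repository resource groups but never in the target cluster's resource group, so the PVC went Lost.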

