Bug 1807820 - [Customer0 data migration] Failed to import PVC/DV/VM data
Summary: [Customer0 data migration] Failed to import PVC/DV/VM data
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 2.3.0
Assignee: Adam Litke
QA Contact: Qixuan Wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-27 10:13 UTC by Qixuan Wang
Modified: 2021-01-04 19:02 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-04 19:10:56 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHEA-2020:2011 (Last Updated: 2020-05-04 19:11:06 UTC)

Description Qixuan Wang 2020-02-27 10:13:03 UTC
Description of problem:
Importing PVC/DV/VM data failed:

[cloud-user@ocp-psi-executor hostpath-provisioner-upgrade-master]$ ./import.sh qwang-23-fbfr6-worker-5rkbs qwang-23-fbfr6-worker-d64fh
Python 3 is installed, continuing
Kubevirt namespace openshift-cnv, CDI namespace openshift-cnv
Checking node: qwang-23-fbfr6-worker-5rkbs
Found node qwang-23-fbfr6-worker-5rkbs
Verifying input directory exists.
Verifying input file exists.
Checking node: qwang-23-fbfr6-worker-d64fh
Found node qwang-23-fbfr6-worker-d64fh
Verifying input directory exists.
Verifying input file exists.
Current CVO requested replicas: 1
Bringing down CVO and OLM, warning this will generate cluster health alerts!!!
deployment.apps/cluster-version-operator scaled
Current OLM requested replicas: 1
deployment.apps/olm-operator scaled
Verified CVO and OLM are down, bringing down kubevirt and CDI
Current Kubevirt operator requested replicas: 2
deployment.apps/virt-operator scaled
Current virt controller requested replicas: 2
deployment.apps/virt-controller scaled
Current CDI operator requested replicas: 1
deployment.apps/cdi-operator scaled
Current CDI controller requested replicas: 1
deployment.apps/cdi-deployment scaled
Kubevirt and CDI are down, importing Virtual Machines
error executing ['create', '-f', '-']
Error from server: error when creating "STDIN": admission webhook "mutatevirtualmachines.example.com" denied the request: v1.VirtualMachine.Spec: v1.VirtualMachineSpec.Running: ReadBool: expect t or f, but found ", error found in #10 byte of ...|running":"false","te|..., bigger context ...|a-Cloud-Base-31-1.9.x86_64.qcow2"}}}}],"running":"false","template":{"metadata":{"labels":{"kubevirt|...

Import failed
error executing ['create', '-f', '-']
Error from server: error when creating "STDIN": admission webhook "mutatevirtualmachines.example.com" denied the request: v1.VirtualMachine.Spec: v1.VirtualMachineSpec.Running: ReadBool: expect t or f, but found ", error found in #10 byte of ...|running":"false","te|..., bigger context ...|.0-x86_64-disk.qcow2"}}},"status":{}}],"running":"false","template":{"metadata":{"creationTimestamp"|...

Import failed
Finished importing
deployment.apps/cdi-deployment scaled
deployment.apps/cdi-operator scaled
deployment.apps/virt-controller scaled
deployment.apps/virt-operator scaled
Kubevirt and CDI restored
deployment.apps/olm-operator scaled
deployment.apps/cluster-version-operator scaled
Finished restoring cluster operations
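
The webhook rejection above comes from spec.running having been serialized as the string "false" instead of a JSON boolean, which v1.VirtualMachineSpec.Running cannot parse ("ReadBool: expect t or f"). Below is a minimal Python sketch of one way the field could be coerced back to a boolean before the manifests are re-created; the normalize_running helper is hypothetical and is not the actual fix applied to the export/import scripts:

import json

def normalize_running(vm):
    """Coerce spec.running back to a JSON boolean if it was exported as a string.

    Hypothetical helper for illustration; the real export_pv.py/import_pv.py
    scripts may handle this differently.
    """
    running = vm.get("spec", {}).get("running")
    if isinstance(running, str):
        # The VirtualMachine admission webhook expects a real boolean here,
        # so "true"/"false" strings must be converted before the object is created.
        vm["spec"]["running"] = running.lower() == "true"
    return vm

if __name__ == "__main__":
    # Example: an exported VM carrying the problematic string value.
    exported = {"spec": {"running": "false", "template": {}}}
    print(json.dumps(normalize_running(exported)))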



Version-Release number of selected component (if applicable):
CNV 2.2 and 2.3


How reproducible:
100%


Steps to Reproduce:
1. Prepare PVC/DV/VMs
2. Export data
3. Export KUBEVIRT_NS=openshift-cnv and CDI_NS=openshift-cnv
4. Import data using export.json


Actual results:
Import fails.


Expected results:
Import succeeds.


Additional info:

Comment 1 Alexander Wels 2020-02-27 13:10:45 UTC
The data not being copied is not a bug; it's an artifact of the way we are testing (two separate clusters). In the customer environment this will not be the case: the same node will be used in both clusters, so the data will remain on the node. We need to make sure the PV paths are identical in both clusters.
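
Since this hinges on the hostpath PV paths being identical on the node in both clusters, here is a small sketch that lists the node-local path behind each exported PV so it can be checked on the target node. It assumes the export file is a plain JSON list of PersistentVolume objects, which may not match the actual layout written by export_pv.py:

import json
import sys

def pv_path(pv):
    """Return the node-local path backing a PV, if it has one.

    Checks both spec.hostPath.path and spec.local.path, since either source
    can be used depending on how the volume was provisioned.
    """
    spec = pv.get("spec", {})
    for source in ("hostPath", "local"):
        path = spec.get(source, {}).get("path")
        if path:
            return path
    return None

if __name__ == "__main__":
    # Usage: python3 list_pv_paths.py export.json
    with open(sys.argv[1]) as f:
        pvs = json.load(f)
    for pv in pvs:
        print(pv.get("metadata", {}).get("name"), pv_path(pv))

Comparing that list with the directories that actually exist on the corresponding node in the target cluster confirms the paths line up before running ./import.sh.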

Comment 2 Alexander Wels 2020-02-27 15:26:27 UTC
Fixed

Comment 3 Qixuan Wang 2020-02-28 04:00:47 UTC
The bug has been fixed, so I'm moving it to VERIFIED. Thanks.


Validation steps:
[cloud-user@ocp-psi-executor hostpath-provisioner-upgrade-master]$ ls
README.md  export.sh  export_pv.py  import.sh  import_pv.py  qwang-23-fbfr6-worker-5rkbs  qwang-23-fbfr6-worker-d64fh

[cloud-user@ocp-psi-executor hostpath-provisioner-upgrade-master]$ ./import.sh qwang-23-fbfr6-worker-5rkbs qwang-23-fbfr6-worker-d64fh
Python 3 is installed, continuing
Kubevirt namespace openshift-cnv, CDI namespace openshift-cnv
Checking node: qwang-23-fbfr6-worker-5rkbs
Found node qwang-23-fbfr6-worker-5rkbs
Verifying input directory exists.
Verifying input file exists.
Checking node: qwang-23-fbfr6-worker-d64fh
Found node qwang-23-fbfr6-worker-d64fh
Verifying input directory exists.
Verifying input file exists.
Current CVO requested replicas: 1
Bringing down CVO and OLM, warning this will generate cluster health alerts!!!
deployment.apps/cluster-version-operator scaled
Current OLM requested replicas: 1
deployment.apps/olm-operator scaled
Verified CVO and OLM are down, bringing down kubevirt and CDI
Current Kubevirt operator requested replicas: 2
deployment.apps/virt-operator scaled
Current virt controller requested replicas: 2
deployment.apps/virt-controller scaled
Current CDI operator requested replicas: 1
deployment.apps/cdi-operator scaled
Current CDI controller requested replicas: 1
deployment.apps/cdi-deployment scaled
Kubevirt and CDI are down, importing Virtual Machines
Created 6 VirtualMachines
Created 6 DataVolumes
Created 6 PVCs
Created 6 PVs
Created 3 VirtualMachines
Created 3 DataVolumes
Created 3 PVCs
Created 3 PVs
deployment.apps/cdi-deployment scaled
deployment.apps/cdi-operator scaled
deployment.apps/virt-controller scaled
deployment.apps/virt-operator scaled
Kubevirt and CDI restored
deployment.apps/olm-operator scaled
deployment.apps/cluster-version-operator scaled
Finished restoring cluster operations

Comment 7 errata-xmlrpc 2020-05-04 19:10:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2011

