Description of problem:
Importing PVC/DV/VM data failed:

[cloud-user@ocp-psi-executor hostpath-provisioner-upgrade-master]$ ./import.sh qwang-23-fbfr6-worker-5rkbs qwang-23-fbfr6-worker-d64fh
Python 3 is installed, continuing
Kubevirt namespace openshift-cnv, CDI namespace openshift-cnv
Checking node: qwang-23-fbfr6-worker-5rkbs
Found node qwang-23-fbfr6-worker-5rkbs
Verifying input directory exists.
Verifying input file exists.
Checking node: qwang-23-fbfr6-worker-d64fh
Found node qwang-23-fbfr6-worker-d64fh
Verifying input directory exists.
Verifying input file exists.
Current CVO requested replicas: 1
Bringing down CVO and OLM, warning this will generate cluster health alerts!!!
deployment.apps/cluster-version-operator scaled
Current OLM requested replicas: 1
deployment.apps/olm-operator scaled
Verified CVO and OLM are down, bringing down kubevirt and CDI
Current Kubevirt operator requested replicas: 2
deployment.apps/virt-operator scaled
Current virt controller requested replicas: 2
deployment.apps/virt-controller scaled
Current CDI operator requested replicas: 1
deployment.apps/cdi-operator scaled
Current CDI controller requested replicas: 1
deployment.apps/cdi-deployment scaled
Kubevirt and CDI are down, importing Virtual Machines
error executing ['create', '-f', '-'] Error from server: error when creating "STDIN": admission webhook "mutatevirtualmachines.example.com" denied the request: v1.VirtualMachine.Spec: v1.VirtualMachineSpec.Running: ReadBool: expect t or f, but found ", error found in #10 byte of ...|running":"false","te|..., bigger context ...|a-Cloud-Base-31-1.9.x86_64.qcow2"}}}}],"running":"false","template":{"metadata":{"labels":{"kubevirt|...
Import failed
error executing ['create', '-f', '-'] Error from server: error when creating "STDIN": admission webhook "mutatevirtualmachines.example.com" denied the request: v1.VirtualMachine.Spec: v1.VirtualMachineSpec.Running: ReadBool: expect t or f, but found ", error found in #10 byte of ...|.0-x86_64-disk.qcow2"}}},"status":{}}],"running":"false","template":{"metadata":{"creationTimestamp"|...
Import failed
Finished importing
deployment.apps/cdi-deployment scaled
deployment.apps/cdi-operator scaled
deployment.apps/virt-controller scaled
deployment.apps/virt-operator scaled
Kubevirt and CDI restored
deployment.apps/olm-operator scaled
deployment.apps/cluster-version-operator scaled
Finished restoring cluster operations

Version-Release number of selected component (if applicable):
CNV 2.2 and 2.3

How reproducible:
100%

Steps to Reproduce:
1. Prepare PVC/DV/VMs
2. Export data
3. Export KUBEVIRT_NS=openshift-cnv and CDI_NS=openshift-cnv
4. Import data using export.json

Actual results:
Import fails

Expected results:
Import succeeds

Additional info:
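From the error above, the webhook rejects the VirtualMachine objects because spec.running is serialized as the string "false" rather than a boolean in the exported JSON. As a rough illustration of the kind of normalization needed (a sketch only; the export.json layout, the "virtualmachines" key, and the helper name are assumptions, not the actual import_pv.py code):

#!/usr/bin/env python3
# Sketch: coerce spec.running from the strings "true"/"false" to real JSON
# booleans in an exported VirtualMachine list before re-creating the objects.
# The "virtualmachines" key and flat-file layout are assumptions for
# illustration only.
import json

def coerce_running(vm):
    """Replace a string-valued spec.running with a proper boolean."""
    running = vm.get("spec", {}).get("running")
    if isinstance(running, str):
        vm["spec"]["running"] = running.lower() == "true"
    return vm

with open("export.json") as f:
    export = json.load(f)

export["virtualmachines"] = [coerce_running(vm) for vm in export.get("virtualmachines", [])]

with open("export.json", "w") as f:
    json.dump(export, f, indent=2)

Objects created from the rewritten file then pass the ReadBool check, since "running": false is a valid JSON boolean.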
The data not being copied is not a bug; it's an artifact of the way we are testing (two separate clusters). In the customer environment this will not be the case: the same node will be used in both clusters and the data will remain on the node. We need to make sure the PV paths in both clusters are identical (see the sketch below).
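To sanity-check that requirement before importing, one could list the hostPath of every PV captured in the export and compare it with the directories present on the node in the new cluster. A minimal sketch, assuming the export file is export.json and that it stores PVs under a "persistentvolumes" key (both assumptions, not guaranteed by the actual scripts):

#!/usr/bin/env python3
# Sketch: print the hostPath of each exported PV so the paths can be compared
# against what exists on the node in the target cluster before importing.
# "export.json" and the "persistentvolumes" key are assumptions for
# illustration only.
import json

with open("export.json") as f:
    export = json.load(f)

for pv in export.get("persistentvolumes", []):
    name = pv["metadata"]["name"]
    path = pv.get("spec", {}).get("hostPath", {}).get("path", "<no hostPath>")
    print(f"{name}\t{path}")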
Fixed
The bug has been fixed, so moving it to VERIFIED, thanks.

Validation steps:

[cloud-user@ocp-psi-executor hostpath-provisioner-upgrade-master]$ ls
README.md  export.sh  export_pv.py  import.sh  import_pv.py  qwang-23-fbfr6-worker-5rkbs  qwang-23-fbfr6-worker-d64fh
[cloud-user@ocp-psi-executor hostpath-provisioner-upgrade-master]$ ./import.sh qwang-23-fbfr6-worker-5rkbs qwang-23-fbfr6-worker-d64fh
Python 3 is installed, continuing
Kubevirt namespace openshift-cnv, CDI namespace openshift-cnv
Checking node: qwang-23-fbfr6-worker-5rkbs
Found node qwang-23-fbfr6-worker-5rkbs
Verifying input directory exists.
Verifying input file exists.
Checking node: qwang-23-fbfr6-worker-d64fh
Found node qwang-23-fbfr6-worker-d64fh
Verifying input directory exists.
Verifying input file exists.
Current CVO requested replicas: 1
Bringing down CVO and OLM, warning this will generate cluster health alerts!!!
deployment.apps/cluster-version-operator scaled
Current OLM requested replicas: 1
deployment.apps/olm-operator scaled
Verified CVO and OLM are down, bringing down kubevirt and CDI
Current Kubevirt operator requested replicas: 2
deployment.apps/virt-operator scaled
Current virt controller requested replicas: 2
deployment.apps/virt-controller scaled
Current CDI operator requested replicas: 1
deployment.apps/cdi-operator scaled
Current CDI controller requested replicas: 1
deployment.apps/cdi-deployment scaled
Kubevirt and CDI are down, importing Virtual Machines
Created 6 VirtualMachines
Created 6 DataVolumes
Created 6 PVCs
Created 6 PVs
Created 3 VirtualMachines
Created 3 DataVolumes
Created 3 PVCs
Created 3 PVs
deployment.apps/cdi-deployment scaled
deployment.apps/cdi-operator scaled
deployment.apps/virt-controller scaled
deployment.apps/virt-operator scaled
Kubevirt and CDI restored
deployment.apps/olm-operator scaled
deployment.apps/cluster-version-operator scaled
Finished restoring cluster operations
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:2011