Description of problem:
Run a Cirros VM import from RHV to CNV via the API. While the VM import is running and the VM disk is being imported, delete the vm-import-controller pod. The vm-import-controller pod is started again automatically. The CDI importer pod goes into CrashLoopBackOff and is eventually terminated, and the VM import stays in a failed status:
"Import error (RHV) cirros-import1 could not be imported. DataVolumeCreationFailed: Error while importing disk image: cirros-import1-00aad0a5-98c0-4a44-877c-24e34155a91e. pod CrashLoopBackoff restart exceeded"

Version-Release number of selected component (if applicable):
CNV-2.5
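For reference, a rough sketch of the CLI steps involved in reproducing this (pod and resource names are illustrative and will differ per environment):

# Watch the import CR and the CDI importer pod while the disk transfer is running
$ oc get virtualmachineimport -n default -o yaml
$ oc get pods -n default | grep importer

# Delete the running vm-import-controller pod; it is recreated automatically
$ oc get pods -n openshift-cnv | grep vm-import-controller
$ oc delete pod vm-import-controller-<hash> -n openshift-cnv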
Created attachment 1719083 [details] vm import yaml
Created attachment 1719084 [details] vm import controller log
It seems to be a CDI issue. @Alex please take a look.
Can you post the CDI importer pod logs and yaml?
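In case it helps, these can usually be collected with something like the following (the importer pod name is illustrative; the pod runs in the target namespace, default in this case):

$ oc get pods -n default | grep importer
$ oc logs importer-<target-vm-name>-<disk-id> -n default > importer.log
$ oc get pod importer-<target-vm-name>-<disk-id> -n default -o yaml > importer-pod.yaml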
I tried several times to reproduce the bug on the exact same env but couldn't. I think that when I reported the bug, the UI showed a progress of 22%.
Since this has not been reproduced so far, I've set the target release to 2.5.1 to give us a chance to reproduce it, while not blocking the 2.5.0 release.
@Ilanit, let's try to reproduce it a few more times.
Reproduced. This is specific to Ceph-RBD/Block storage. (I was trying to reproduce on NFS before - this is why it didn't reproduce.)

Details:

Secret creation:
---------------
$ cat <<EOF | oc create -f -
---
apiVersion: v1
kind: Secret
metadata:
  name: blue-secret
  namespace: default
type: Opaque
stringData:
  ovirt: |
    apiUrl: "https://<RHV FQDN>/ovirt-engine/api"
    username: <username>
    password: <password>
    caCert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
EOF

Resource Mapping:
----------------
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: ResourceMapping
metadata:
  name: example-resourcemappings
  namespace: default
spec:
  ovirt:
    networkMappings:
      - source:
          name: ovirtmgmt/ovirtmgmt
        target:
          name: pod
        type: pod
    storageMappings:
      - source:
          name: v2v-fc
        target:
          name: ocs-storagecluster-ceph-rbd
        volumeMode: Block
EOF

VM import:
---------
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: example-virtualmachineimport
  namespace: default
spec:
  providerCredentialsSecret:
    name: blue-secret
    namespace: default # optional, if not specified, use CR's namespace
  resourceMapping:
    name: example-resourcemappings
    namespace: default
  targetVmName: cirros-import
  startVm: false
  source:
    ovirt:
      vm:
        id: 6593a4ad-e037-4dfd-8b3d-fa1450ad6122
EOF

A few seconds after the CDI importer pod was created, run:
$ oc delete pod vm-import-controller-7d569497bc-fvtqd -n openshift-cnv

In the UI, the VM import progress showed 26%. The VM import failed as reported in the bug description.

importer log:
$ oc logs importer-cirros-import-00aad0a5-98c0-4a44-877c-24e34155a91e -f
I1007 07:01:38.313631 1 importer.go:52] Starting importer
I1007 07:01:38.313800 1 importer.go:116] begin import process
I1007 07:01:45.027577 1 http-datasource.go:219] Attempting to get certs from /certs/ca.pem
I1007 07:01:45.093108 1 data-processor.go:302] Calculating available size
I1007 07:01:45.093251 1 data-processor.go:314] Checking out file system volume size.
I1007 07:01:45.093280 1 data-processor.go:322] Request image size not empty.
I1007 07:01:45.093332 1 data-processor.go:327] Target size 34792448.
I1007 07:01:45.095327 1 data-processor.go:224] New phase: TransferDataFile
I1007 07:01:45.095810 1 util.go:161] Writing data...
I1007 07:01:46.096543 1 prometheus.go:69] 25.93
E1007 07:01:46.516916 1 util.go:163] Unable to write file from dataReader: write /data/disk.img: no space left on device
E1007 07:01:46.534844 1 data-processor.go:221] write /data/disk.img: no space left on device
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:115
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:191
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
E1007 07:01:46.535079 1 importer.go:173] write /data/disk.img: no space left on device
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:115
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:191
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357

After this failure, I tried to import the same VM again, and found that the failure had also deleted the resource mapping. I created the resource mapping mentioned above again, and the VM import failed.
(I did not touch the controller pod.)

That was the importer log:
$ oc logs importer-cirros-import-00aad0a5-98c0-4a44-877c-24e34155a91e -f
I1007 07:30:10.843514 1 importer.go:52] Starting importer
I1007 07:30:10.843747 1 importer.go:116] begin import process
I1007 07:30:12.660758 1 http-datasource.go:219] Attempting to get certs from /certs/ca.pem
I1007 07:30:12.696263 1 data-processor.go:302] Calculating available size
I1007 07:30:12.696401 1 data-processor.go:314] Checking out file system volume size.
I1007 07:30:12.696432 1 data-processor.go:322] Request image size not empty.
I1007 07:30:12.696504 1 data-processor.go:327] Target size 34792448.
I1007 07:30:12.700573 1 data-processor.go:224] New phase: TransferDataFile
I1007 07:30:12.701446 1 util.go:161] Writing data...
I1007 07:30:13.703996 1 prometheus.go:69] 25.93
E1007 07:30:14.258729 1 util.go:163] Unable to write file from dataReader: write /data/disk.img: no space left on device
E1007 07:30:14.269879 1 data-processor.go:221] write /data/disk.img: no space left on device
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:115
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:191
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
E1007 07:30:14.270030 1 importer.go:173] write /data/disk.img: no space left on device
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:115
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:191
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357

Suspecting that there is a real space issue on Ceph-RBD, I then ran the same VM import but with NFS/Filesystem (nfs was the default, and I removed the storage mapping from the resource mapping) - this VM import was successful. I then restored the resource mapping so that the storage mapping points to Ceph-RBD/Block, ran the VM import again, and this import ended successfully.
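As a side note, a quick way to rule out a genuine capacity problem on the Ceph-RBD side is to compare the size requested by the DataVolume/PVC with what was actually provisioned (the PVC name below is inferred from the importer pod name above and may differ):

$ oc get dv -n default
$ oc get pvc cirros-import-00aad0a5-98c0-4a44-877c-24e34155a91e -n default \
    -o jsonpath='{.spec.resources.requests.storage}{"  "}{.status.capacity.storage}{"\n"}'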
I think you hit BZ #1883908. It seems that we set an ownerReference on the ResourceMapping, and once the import fails we clean it up. Thanks for reproducing.
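For illustration only, this is roughly what such an ownerReference on the ResourceMapping would look like (the field values below are hypothetical, not taken from the affected cluster); with an ownerReference in place, the ResourceMapping is garbage-collected together with the owning VirtualMachineImport when the import is cleaned up:

apiVersion: v2v.kubevirt.io/v1beta1
kind: ResourceMapping
metadata:
  name: example-resourcemappings
  namespace: default
  ownerReferences:
    # hypothetical values for illustration
    - apiVersion: v2v.kubevirt.io/v1beta1
      kind: VirtualMachineImport
      name: example-virtualmachineimport
      uid: 00000000-0000-0000-0000-000000000000
      controller: true
      blockOwnerDeletion: true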
Note that BZ #1883908 was reported for Ceph-RBD/Filesystem and not Ceph-RBD/Block, as used here.
The fix for BZ #1883908 will be available in 2.6, and we already have BZ #1864577, which will use it.

*** This bug has been marked as a duplicate of bug 1864577 ***