Bug 1858595 - [v2v] [RFE] VM import RHV to CNV The import should not fail when there is no space left by the ceph provider
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: V2V
Version: 2.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2.5.1
Assignee: Ondra Machacek
QA Contact: Amos Mastbaum
URL:
Whiteboard:
Duplicates: 1721504
Depends On:
Blocks: 1893529
 
Reported: 2020-07-19 11:34 UTC by Amos Mastbaum
Modified: 2021-01-05 08:57 UTC
CC List: 7 users

Fixed In Version: v2.5.0-18
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-02 10:53:17 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt vm-import-operator pull 367 0 None closed Add event on import pod failure 2021-01-05 08:56:50 UTC

Description Amos Mastbaum 2020-07-19 11:34:04 UTC
Description of problem:

Importing a VM with a disk larger than the space available from the CephFS provider should put the import into a Pending state, with a proper message, until enough space is available.



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Import a VM, using the CephFS file storage provider as the target, with a disk larger than the space available from this provider.
2. Monitor the cdi-importer.
3. After the cdi-importer has crashed, continue monitoring the new importer.
4. Monitor the space available from CephFS (one way to check is sketched below).
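
For step 4, one way to check the remaining Ceph capacity on an OCS cluster is via the rook-ceph-tools pod (a sketch; the tools deployment must be enabled, and names may differ per cluster):

oc -n openshift-storage rsh deploy/rook-ceph-tools ceph df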

Actual results:

The import fails with "DataVolumeCreationFailed".
After the cdi-importer crashes 2 times, the import fails with the following error:

DataVolumeCreationFailed



Expected results:
The import should remain Pending with a proper message.


Additional info:

***cdi-importer log
Unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:108
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:180
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:182
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
E0716 11:47:01.724044 1 importer.go:159] write /data/disk.img: no space left on device
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:108
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:180
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:182
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357



***import-controller log
{"level":"error","ts":1594894152.9632962,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"virtualmachineimport-controller","request":"amastbau1/a1","error":"Import failure clean-up for amastbau1/a1 failed: clean-up for amastbau1/a1 failed: Delete of datavolumes of VM import a1/amastbau1 failed: Find of datavolumes of VM import a1/amastbau1 failed: [DataVolume.cdi.kubevirt.io \"a1-1367becd-d41d-498a-854c-2dff3630316a\" not found DataVolume.cdi.kubevirt.io \"a1-8842557a-e3e9-4da2-bd23-5339dac8026e\" not found], VirtualMachine.kubevirt.io \"a1\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}


***pods/importer-a3-1367becd-d41d-498a-854c-2dff3630316a
Name:           importer-a3-1367becd-d41d-498a-854c-2dff3630316a
Namespace:      amastbau1
Priority:       0
Node:           istein2-fg47m-worker-mb8br/192.168.3.64
Start Time:     Thu, 16 Jul 2020 15:13:21 +0300
Labels:         app=containerized-data-importer
                cdi.kubevirt.io=importer
                prometheus.cdi.kubevirt.io=
Annotations:    cdi.kubevirt.io/storage.createdByController: yes
                openshift.io/scc: containerized-data-importer
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  PersistentVolumeClaim/a3-1367becd-d41d-498a-854c-2dff3630316a
Containers:
  importer:
    Container ID:  
    Image:         registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer@sha256:428737e5c3b4122e9d3eb06551fb197358ef893cd26e0b591e445d2523e22765
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      -v=1
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:     0
      memory:  0
    Environment:
      IMPORTER_SOURCE:         imageio
      IMPORTER_ENDPOINT:       https://rhev-red-03.rdu2.scalelab.redhat.com/ovirt-engine/api
      IMPORTER_CONTENTTYPE:    kubevirt
      IMPORTER_IMAGE_SIZE:     100Gi
      OWNER_UID:               f1949635-cbf6-4dce-b334-5f42e03460a6
      INSECURE_TLS:            false
      IMPORTER_DISK_ID:        1367becd-d41d-498a-854c-2dff3630316a
      IMPORTER_ACCESS_KEY_ID:  <set to the key 'accessKeyId' in secret 'vmimport.v2v.kubevirt.iopg7s8'>  Optional: false
      IMPORTER_SECRET_KEY:     <set to the key 'secretKey' in secret 'vmimport.v2v.kubevirt.iopg7s8'>    Optional: false
      IMPORTER_CERT_DIR:       /certs
    Mounts:
      /certs from cdi-cert-vol (rw)
      /data from cdi-data-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pmhzl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cdi-data-vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  a3-1367becd-d41d-498a-854c-2dff3630316a
    ReadOnly:   false
  cdi-cert-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vmimport.v2v.kubevirt.iojqxw5
    Optional:  false
  default-token-pmhzl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pmhzl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                   From                                 Message
  ----     ------       ----                  ----                                 -------
  Normal   Scheduled    26m                   default-scheduler                    Successfully assigned amastbau1/importer-a3-1367becd-d41d-498a-854c-2dff3630316a to istein2-fg47m-worker-mb8br
  Warning  FailedMount  22m                   kubelet, istein2-fg47m-worker-mb8br  MountVolume.MountDevice failed for volume "pvc-c287f53d-23b7-47c9-ad20-cbabba235f89" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  FailedMount  5m51s (x3 over 17m)   kubelet, istein2-fg47m-worker-mb8br  Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[default-token-pmhzl cdi-data-vol cdi-cert-vol]: timed out waiting for the condition
  Warning  FailedMount  2m22s (x17 over 22m)  kubelet, istein2-fg47m-worker-mb8br  MountVolume.MountDevice failed for volume "pvc-c287f53d-23b7-47c9-ad20-cbabba235f89" : rpc error: code = Aborted desc = an operation with the given Volume ID 0001-0011-openshift-storage-0000000000000001-c07e7bd4-c756-11ea-a9e7-0a580a81021e already exists
  Warning  FailedMount  80s (x8 over 24m)     kubelet, istein2-fg47m-worker-mb8br  Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[cdi-data-vol cdi-cert-vol default-token-pmhzl]: timed out waiting for the condition


[root@puma44 amastbaum]# oc describe pvc/a3-1367becd-d41d-498a-854c-2dff3630316a
Name:          a3-1367becd-d41d-498a-854c-2dff3630316a
Namespace:     amastbau1
StorageClass:  ocs-storagecluster-cephfs
Status:        Bound
Volume:        pvc-c287f53d-23b7-47c9-ad20-cbabba235f89
Labels:        app=containerized-data-importer
Annotations:   cdi.kubevirt.io/storage.condition.running: false
               cdi.kubevirt.io/storage.condition.running.message: 
               cdi.kubevirt.io/storage.condition.running.reason: ContainerCreating
               cdi.kubevirt.io/storage.import.certConfigMap: vmimport.v2v.kubevirt.iojqxw5
               cdi.kubevirt.io/storage.import.diskId: 1367becd-d41d-498a-854c-2dff3630316a
               cdi.kubevirt.io/storage.import.endpoint: https://rhev-red-03.rdu2.scalelab.redhat.com/ovirt-engine/api
               cdi.kubevirt.io/storage.import.importPodName: importer-a3-1367becd-d41d-498a-854c-2dff3630316a
               cdi.kubevirt.io/storage.import.secretName: vmimport.v2v.kubevirt.iopg7s8
               cdi.kubevirt.io/storage.import.source: imageio
               cdi.kubevirt.io/storage.pod.phase: Pending
               cdi.kubevirt.io/storage.pod.restarts: 0
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    importer-a3-1367becd-d41d-498a-854c-2dff3630316a
Events:
  Type     Reason                 Age                 From                                                                                                                     Message
  ----     ------                 ----                ----                                                                                                                     -------
  Normal   Provisioning           80m                 openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-59b78f947-p2jht_577f5920-df82-423e-b2a6-9bf9bac75c90  External provisioner is provisioning volume for claim "amastbau1/a3-1367becd-d41d-498a-854c-2dff3630316a"
  Normal   ExternalProvisioning   80m (x4 over 80m)   persistentvolume-controller                                                                                              waiting for a volume to be created, either by external provisioner "openshift-storage.cephfs.csi.ceph.com" or manually created by system administrator
  Normal   ProvisioningSucceeded  80m                 openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-59b78f947-p2jht_577f5920-df82-423e-b2a6-9bf9bac75c90  Successfully provisioned volume pvc-c287f53d-23b7-47c9-ad20-cbabba235f89
  Warning  ErrImportFailed        56m (x4 over 56m)   import-controller                                                                                                        Unable to process data: write /data/disk.img: no space left on device
  Warning  ErrImportFailed        55m (x16 over 56m)  import-controller                                                                                                        Unable to connect to imageio data source: Fault reason is "Operation Failed". Fault detail is "[Cannot transfer Virtual Disk: The following disks are locked: virt-enabled-vm_Disk1_amastbau.rhel8. Please try again in a few minutes.]". HTTP response code is "409". HTTP response message is "409 Conflict".
  Warning  ErrImportFailed        31m (x5 over 31m)   import-controller                                                                                                        Unable to connect to imageio data source: Fault reason is "Operation Failed". Fault detail is "[Cannot transfer Virtual Disk: The following disks are locked: virt-enabled-vm_Disk1_amastbau.rhel8. Please try again in a few minutes.]". HTTP response code is "409". HTTP response message is "409 Conflict".

[root@puma44 amastbaum]# oc describe pv/pvc-c287f53d-23b7-47c9-ad20-cbabba235f89
Name:            pvc-c287f53d-23b7-47c9-ad20-cbabba235f89
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ocs-storagecluster-cephfs
Status:          Bound
Claim:           amastbau1/a3-1367becd-d41d-498a-854c-2dff3630316a
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        100Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            openshift-storage.cephfs.csi.ceph.com
    FSType:            ext4
    VolumeHandle:      0001-0011-openshift-storage-0000000000000001-c07e7bd4-c756-11ea-a9e7-0a580a81021e
    ReadOnly:          false
    VolumeAttributes:      clusterID=openshift-storage
                           fsName=ocs-storagecluster-cephfilesystem
                           storage.kubernetes.io/csiProvisionerIdentity=1594897186407-8081-openshift-storage.cephfs.csi.ceph.com
Events:                <none>
[root@puma44 amastbaum]#

[root@puma44 amastbaum]# oc describe pods/importer-a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Name:           importer-a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Namespace:      amastbau1
Priority:       0
Node:           istein2-fg47m-worker-mb8br/192.168.3.64
Start Time:     Thu, 16 Jul 2020 15:13:21 +0300
Labels:         app=containerized-data-importer
                cdi.kubevirt.io=importer
                prometheus.cdi.kubevirt.io=
Annotations:    cdi.kubevirt.io/storage.createdByController: yes
                openshift.io/scc: containerized-data-importer
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  PersistentVolumeClaim/a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Containers:
  importer:
    Container ID:  
    Image:         registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer@sha256:428737e5c3b4122e9d3eb06551fb197358ef893cd26e0b591e445d2523e22765
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      -v=1
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:     0
      memory:  0
    Environment:
      IMPORTER_SOURCE:         imageio
      IMPORTER_ENDPOINT:       https://rhev-red-03.rdu2.scalelab.redhat.com/ovirt-engine/api
      IMPORTER_CONTENTTYPE:    kubevirt
      IMPORTER_IMAGE_SIZE:     200Gi
      OWNER_UID:               7e0f28ba-8957-452f-99bb-cac6b765d489
      INSECURE_TLS:            false
      IMPORTER_DISK_ID:        8842557a-e3e9-4da2-bd23-5339dac8026e
      IMPORTER_ACCESS_KEY_ID:  <set to the key 'accessKeyId' in secret 'vmimport.v2v.kubevirt.iopg7s8'>  Optional: false
      IMPORTER_SECRET_KEY:     <set to the key 'secretKey' in secret 'vmimport.v2v.kubevirt.iopg7s8'>    Optional: false
      IMPORTER_CERT_DIR:       /certs
    Mounts:
      /certs from cdi-cert-vol (rw)
      /data from cdi-data-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pmhzl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cdi-data-vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  a3-8842557a-e3e9-4da2-bd23-5339dac8026e
    ReadOnly:   false
  cdi-cert-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vmimport.v2v.kubevirt.iojqxw5
    Optional:  false
  default-token-pmhzl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pmhzl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                   From                                 Message
  ----     ------       ----                  ----                                 -------
  Normal   Scheduled    37m                   default-scheduler                    Successfully assigned amastbau1/importer-a3-8842557a-e3e9-4da2-bd23-5339dac8026e to istein2-fg47m-worker-mb8br
  Warning  FailedMount  34m                   kubelet, istein2-fg47m-worker-mb8br  MountVolume.MountDevice failed for volume "pvc-163d019e-1112-48c6-8ca1-dc4a8d72dfbf" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  FailedMount  26m                   kubelet, istein2-fg47m-worker-mb8br  Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[default-token-pmhzl cdi-data-vol cdi-cert-vol]: timed out waiting for the condition
  Warning  FailedMount  9m45s (x19 over 34m)  kubelet, istein2-fg47m-worker-mb8br  MountVolume.MountDevice failed for volume "pvc-163d019e-1112-48c6-8ca1-dc4a8d72dfbf" : rpc error: code = Aborted desc = an operation with the given Volume ID 0001-0011-openshift-storage-0000000000000001-c06cdbff-c756-11ea-a9e7-0a580a81021e already exists
  Warning  FailedMount  3m46s (x13 over 35m)  kubelet, istein2-fg47m-worker-mb8br  Unable to attach or mount volumes: unmounted volumes=[cdi-data-vol], unattached volumes=[cdi-data-vol cdi-cert-vol default-token-pmhzl]: timed out waiting for the condition


[root@puma44 amastbaum]# oc describe pvc/a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Name:          a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Namespace:     amastbau1
StorageClass:  ocs-storagecluster-cephfs
Status:        Bound
Volume:        pvc-163d019e-1112-48c6-8ca1-dc4a8d72dfbf
Labels:        app=containerized-data-importer
Annotations:   cdi.kubevirt.io/storage.condition.running: true
               cdi.kubevirt.io/storage.condition.running.message: 
               cdi.kubevirt.io/storage.condition.running.reason: Pod is running
               cdi.kubevirt.io/storage.import.certConfigMap: vmimport.v2v.kubevirt.iojqxw5
               cdi.kubevirt.io/storage.import.diskId: 8842557a-e3e9-4da2-bd23-5339dac8026e
               cdi.kubevirt.io/storage.import.endpoint: https://rhev-red-03.rdu2.scalelab.redhat.com/ovirt-engine/api
               cdi.kubevirt.io/storage.import.importPodName: importer-a3-8842557a-e3e9-4da2-bd23-5339dac8026e
               cdi.kubevirt.io/storage.import.secretName: vmimport.v2v.kubevirt.iopg7s8
               cdi.kubevirt.io/storage.import.source: imageio
               cdi.kubevirt.io/storage.pod.phase: Running
               cdi.kubevirt.io/storage.pod.restarts: 0
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      200Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    importer-a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Events:
  Type     Reason                 Age                 From                                                                                                                     Message
  ----     ------                 ----                ----                                                                                                                     -------
  Normal   Provisioning           91m                 openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-59b78f947-p2jht_577f5920-df82-423e-b2a6-9bf9bac75c90  External provisioner is provisioning volume for claim "amastbau1/a3-8842557a-e3e9-4da2-bd23-5339dac8026e"
  Normal   ExternalProvisioning   91m (x5 over 91m)   persistentvolume-controller                                                                                              waiting for a volume to be created, either by external provisioner "openshift-storage.cephfs.csi.ceph.com" or manually created by system administrator
  Normal   ProvisioningSucceeded  91m                 openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-59b78f947-p2jht_577f5920-df82-423e-b2a6-9bf9bac75c90  Successfully provisioned volume pvc-163d019e-1112-48c6-8ca1-dc4a8d72dfbf
  Warning  ErrImportFailed        68m (x4 over 68m)   import-controller                                                                                                        Unable to process data: write /data/disk.img: no space left on device
  Warning  ErrImportFailed        66m (x16 over 67m)  import-controller                                                                                                        Unable to connect to imageio data source: Fault reason is "Operation Failed". Fault detail is "[Cannot transfer Virtual Disk: The following disks are locked: amastbau.rhel8_Disk1. Please try again in a few minutes.]". HTTP response code is "409". HTTP response message is "409 Conflict".
  Warning  ErrImportFailed        42m (x5 over 42m)   import-controller                                                                                                        Unable to connect to imageio data source: Fault reason is "Operation Failed". Fault detail is "[Cannot transfer Virtual Disk: The following disks are locked: amastbau.rhel8_Disk1. Please try again in a few minutes.]". HTTP response code is "409". HTTP response message is "409 Conflict".
[root@puma44 amastbaum]# 


[root@puma44 amastbaum]# oc describe pv/pvc-163d019e-1112-48c6-8ca1-dc4a8d72dfbf
Name:            pvc-163d019e-1112-48c6-8ca1-dc4a8d72dfbf
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ocs-storagecluster-cephfs
Status:          Bound
Claim:           amastbau1/a3-8842557a-e3e9-4da2-bd23-5339dac8026e
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        200Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            openshift-storage.cephfs.csi.ceph.com
    FSType:            ext4
    VolumeHandle:      0001-0011-openshift-storage-0000000000000001-c06cdbff-c756-11ea-a9e7-0a580a81021e
    ReadOnly:          false
    VolumeAttributes:      clusterID=openshift-storage
                           fsName=ocs-storagecluster-cephfilesystem
                           storage.kubernetes.io/csiProvisionerIdentity=1594897186407-8081-openshift-storage.cephfs.csi.ceph.com
Events:                <none>

Comment 1 Ondra Machacek 2020-07-23 07:58:33 UTC
Actually, this is by design: if the import fails for some reason three times, we stop the import so that we don't overload the network/storage for no reason. I think we can add an option to disable this feature.
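
A minimal Go sketch of that retry policy, including the opt-out suggested above (the names and the threshold constant are assumptions, not the operator's actual code):

package main

import "fmt"

// Assumed threshold from the comment above; the real operator derives the
// failure count from the importer pod status.
const maxImportFailures = 3

// shouldAbortImport gives up after the importer has failed
// maxImportFailures times, unless retries are explicitly unlimited.
func shouldAbortImport(restarts int, unlimitedRetries bool) bool {
	if unlimitedRetries {
		return false
	}
	return restarts >= maxImportFailures
}

func main() {
	for _, r := range []int{1, 2, 3} {
		fmt.Printf("restarts=%d abort=%v\n", r, shouldAbortImport(r, false))
	}
}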

Comment 2 Ondra Machacek 2020-07-23 11:14:10 UTC
We must monitor events and report the errors of the CDI importer pod to the VM Import logs and CR.
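
In the spirit of the linked PR ("Add event on import pod failure"), a hedged sketch of emitting such an event with client-go's EventRecorder; the reason string is an assumption, and a Pod stands in for the VirtualMachineImport CR to keep the sketch self-contained:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

func main() {
	// FakeRecorder stands in for the recorder the operator would obtain
	// from its controller manager; it buffers events on a channel.
	recorder := record.NewFakeRecorder(10)

	// In the operator the event target would be the VirtualMachineImport CR.
	target := &corev1.Pod{}
	target.Namespace, target.Name = "amastbau1", "a1"

	recorder.Eventf(target, corev1.EventTypeWarning, "ImportPodFailure",
		"importer pod failed: %s", "write /data/disk.img: no space left on device")

	fmt.Println(<-recorder.Events) // prints the formatted warning event
}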

Comment 3 Brett Thurber 2020-07-31 13:43:29 UTC
*** Bug 1721504 has been marked as a duplicate of this bug. ***

Comment 4 Amos Mastbaum 2020-10-13 10:58:51 UTC
@Piotr
Was anything changed that needs to be verified here?

Comment 5 Piotr Kliczewski 2020-10-13 11:22:10 UTC
Yes, in this situation we fire an event to let the user know.
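
During verification, the fired event should be visible with something like the following (namespace and CR name taken from this report; the exact reason string is an assumption):

oc get events -n amastbau1 --field-selector type=Warning
oc describe virtualmachineimport a1 -n amastbau1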

Comment 6 Ilanit Stein 2020-10-19 17:02:47 UTC
VM import of a VM with a 100GB disk to target storage NFS gets this message:

Importing (RHV)
rhel8-import is being imported.
Pending: DataVolume rhel8-import-03072434-e45b-430c-8860-ff50b0c71a2c is pending to bound 

Is this related to this fix, or do we need to verify this in another way?
(VM import of a VM with 100GB disk to Ceph-RBD/Block passed)

Comment 7 Piotr Kliczewski 2020-10-19 17:29:53 UTC
Ilanit, a pending DV is expected. You should see a fired alert letting the user know about the situation.

Comment 8 Ilanit Stein 2020-10-29 16:10:08 UTC
Piotr,

We tried to reproduce the "DataVolumeCreationFailed" failure mentioned in the bug description.
VM import of a 100GB VM to Ceph-RBD storage (of 50GB) ends up with the cdi importer repeatedly reporting 47% for some time; then it fails and begins a "New phase" that shows 0% progress.

The VM import progress bar remains at 39% the whole time, and it seems the error coming from the cdi importer is not propagated.

As a result, we cannot see the event that was added in this bug fix.

Part of the cdi importer log that we managed to capture:
I1029 16:00:16.785152       1 importer.go:52] Starting importer
I1029 16:00:16.786785       1 importer.go:116] begin import process
I1029 16:00:18.495650       1 http-datasource.go:219] Attempting to get certs from /certs/ca.pem
I1029 16:00:18.546153       1 data-processor.go:302] Calculating available size
I1029 16:00:18.547157       1 data-processor.go:310] Checking out block volume size.
I1029 16:00:18.547170       1 data-processor.go:322] Request image size not empty.
I1029 16:00:18.547199       1 data-processor.go:327] Target size 100Gi.
I1029 16:00:18.547320       1 data-processor.go:224] New phase: TransferDataFile
I1029 16:00:18.548337       1 util.go:161] Writing data...
I1029 16:00:19.547661       1 prometheus.go:69] 0.00
I1029 16:00:20.547888       1 prometheus.go:69] 0.00
I1029 16:00:21.548133       1 prometheus.go:69] 0.00
I1029 16:00:22.548255       1 prometheus.go:69] 0.00
I1029 16:00:23.548440       1 prometheus.go:69] 0.00
I1029 16:00:24.548616       1 prometheus.go:69] 0.00
I1029 16:00:25.548837       1 prometheus.go:69] 0.00
I1029 16:00:26.549033       1 prometheus.go:69] 0.00
I1029 16:00:27.551421       1 prometheus.go:69] 0.00
I1029 16:00:28.551559       1 prometheus.go:69] 0.00
I1029 16:00:29.552086       1 prometheus.go:69] 0.00
I1029 16:00:30.552228       1 prometheus.go:69] 0.00
I1029 16:00:31.552421       1 prometheus.go:69] 0.00
I1029 16:00:32.552700       1 prometheus.go:69] 0.00
I1029 16:00:33.555754       1 prometheus.go:69] 0.00
I1029 16:00:34.555973       1 prometheus.go:69] 0.00
I1029 16:00:35.556195       1 prometheus.go:69] 0.00
I1029 16:00:36.556394       1 prometheus.go:69] 0.00
I1029 16:00:37.556617       1 prometheus.go:69] 0.00
I1029 16:00:38.556806       1 prometheus.go:69] 0.00
I1029 16:00:39.557024       1 prometheus.go:69] 0.00
I1029 16:00:40.557236       1 prometheus.go:69] 0.00
I1029 16:00:41.559071       1 prometheus.go:69] 0.00
I1029 16:00:42.559268       1 prometheus.go:69] 0.00
I1029 16:00:43.559453       1 prometheus.go:69] 0.00
I1029 16:00:44.559708       1 prometheus.go:69] 0.00
I1029 16:00:45.559912       1 prometheus.go:69] 0.00
I1029 16:00:46.560057       1 prometheus.go:69] 0.00
I1029 16:00:47.560253       1 prometheus.go:69] 0.00
I1029 16:00:48.560472       1 prometheus.go:69] 0.00
I1029 16:00:49.560610       1 prometheus.go:69] 0.00
I1029 16:00:50.560803       1 prometheus.go:69] 0.00
I1029 16:00:51.561134       1 prometheus.go:69] 0.00
I1029 16:00:52.561487       1 prometheus.go:69] 0.00
I1029 16:00:53.561764       1 prometheus.go:69] 0.00
I1029 16:00:54.561990       1 prometheus.go:69] 0.00
I1029 16:00:55.562177       1 prometheus.go:69] 0.00
I1029 16:00:56.562362       1 prometheus.go:69] 0.00
I1029 16:00:57.562577       1 prometheus.go:69] 0.00
I1029 16:00:58.562738       1 prometheus.go:69] 0.00
I1029 16:00:59.562968       1 prometheus.go:69] 0.00
I1029 16:01:00.563158       1 prometheus.go:69] 0.00
I1029 16:01:01.563429       1 prometheus.go:69] 0.00
I1029 16:01:02.563675       1 prometheus.go:69] 0.00
I1029 16:01:03.563918       1 prometheus.go:69] 0.00
I1029 16:01:04.564058       1 prometheus.go:69] 0.00
I1029 16:01:05.565075       1 prometheus.go:69] 0.00
I1029 16:01:06.565260       1 prometheus.go:69] 0.00
I1029 16:01:07.565495       1 prometheus.go:69] 0.00
I1029 16:01:08.565668       1 prometheus.go:69] 0.00
I1029 16:01:09.565897       1 prometheus.go:69] 0.00
I1029 16:01:10.566101       1 prometheus.go:69] 0.00
I1029 16:01:11.566318       1 prometheus.go:69] 0.00
I1029 16:01:12.566521       1 prometheus.go:69] 0.00
I1029 16:01:13.566692       1 prometheus.go:69] 0.00
I1029 16:01:14.566973       1 prometheus.go:69] 0.00
I1029 16:01:15.567099       1 prometheus.go:69] 0.00
I1029 16:01:16.567300       1 prometheus.go:69] 0.00
I1029 16:01:17.567465       1 prometheus.go:69] 0.00
I1029 16:01:18.567733       1 prometheus.go:69] 0.00
I1029 16:01:19.567940       1 prometheus.go:69] 0.00
I1029 16:01:20.568195       1 prometheus.go:69] 0.00
I1029 16:01:21.568411       1 prometheus.go:69] 0.00
I1029 16:01:22.571107       1 prometheus.go:69] 0.00
I1029 16:01:23.571320       1 prometheus.go:69] 0.00
I1029 16:01:24.571777       1 prometheus.go:69] 0.00
I1029 16:01:25.571990       1 prometheus.go:69] 0.00
I1029 16:01:26.573141       1 prometheus.go:69] 0.00
I1029 16:01:27.573287       1 prometheus.go:69] 0.00
I1029 16:01:28.573480       1 prometheus.go:69] 0.00
I1029 16:01:29.573693       1 prometheus.go:69] 0.00
I1029 16:01:30.573856       1 prometheus.go:69] 0.00
I1029 16:01:31.574030       1 prometheus.go:69] 0.00
I1029 16:01:32.574762       1 prometheus.go:69] 0.00
I1029 16:01:33.575052       1 prometheus.go:69] 0.00
I1029 16:01:34.575291       1 prometheus.go:69] 0.00
I1029 16:01:35.575552       1 prometheus.go:69] 0.00
I1029 16:01:36.575810       1 prometheus.go:69] 0.00
I1029 16:01:37.576122       1 prometheus.go:69] 0.00
I1029 16:01:38.576379       1 prometheus.go:69] 0.00
I1029 16:01:39.576583       1 prometheus.go:69] 0.00
I1029 16:01:40.576759       1 prometheus.go:69] 0.00
I1029 16:01:41.577000       1 prometheus.go:69] 0.00
I1029 16:01:42.577162       1 prometheus.go:69] 0.00
I1029 16:01:43.577405       1 prometheus.go:69] 0.00
I1029 16:01:44.577853       1 prometheus.go:69] 0.00
I1029 16:01:45.578084       1 prometheus.go:69] 0.00
I1029 16:01:46.578278       1 prometheus.go:69] 0.00
I1029 16:01:47.578489       1 prometheus.go:69] 0.00
I1029 16:01:48.578672       1 prometheus.go:69] 0.00
I1029 16:01:49.578863       1 prometheus.go:69] 0.00
I1029 16:01:50.579212       1 prometheus.go:69] 0.00
I1029 16:01:51.579395       1 prometheus.go:69] 0.00
I1029 16:01:52.579575       1 prometheus.go:69] 0.00
I1029 16:01:53.579735       1 prometheus.go:69] 0.00
I1029 16:01:54.579915       1 prometheus.go:69] 0.00
I1029 16:01:55.580071       1 prometheus.go:69] 0.00
I1029 16:01:56.580188       1 prometheus.go:69] 0.00
I1029 16:01:57.581113       1 prometheus.go:69] 0.00
I1029 16:01:58.581293       1 prometheus.go:69] 0.00
I1029 16:01:59.581486       1 prometheus.go:69] 0.00
I1029 16:02:00.581717       1 prometheus.go:69] 0.00
I1029 16:02:01.581931       1 prometheus.go:69] 0.00
I1029 16:02:02.582153       1 prometheus.go:69] 0.00
I1029 16:02:03.582466       1 prometheus.go:69] 0.00
I1029 16:02:04.582749       1 prometheus.go:69] 0.00
I1029 16:02:05.584099       1 prometheus.go:69] 0.00
I1029 16:02:06.584362       1 prometheus.go:69] 0.00
I1029 16:02:07.584597       1 prometheus.go:69] 0.00
I1029 16:02:08.584844       1 prometheus.go:69] 0.00
I1029 16:02:09.585042       1 prometheus.go:69] 0.01
I1029 16:02:10.585218       1 prometheus.go:69] 0.01
I1029 16:02:11.585402       1 prometheus.go:69] 0.01
I1029 16:02:12.585624       1 prometheus.go:69] 0.01
I1029 16:02:13.586616       1 prometheus.go:69] 0.01
I1029 16:02:14.587153       1 prometheus.go:69] 0.01
I1029 16:02:15.587334       1 prometheus.go:69] 0.01
I1029 16:02:16.587566       1 prometheus.go:69] 0.01
I1029 16:02:17.587789       1 prometheus.go:69] 0.01
I1029 16:02:18.588048       1 prometheus.go:69] 0.01
I1029 16:02:19.588224       1 prometheus.go:69] 0.01
I1029 16:02:20.589201       1 prometheus.go:69] 0.01
I1029 16:02:21.589386       1 prometheus.go:69] 0.01
I1029 16:02:22.589686       1 prometheus.go:69] 0.01
I1029 16:02:23.589923       1 prometheus.go:69] 0.01
I1029 16:02:24.590244       1 prometheus.go:69] 0.01
I1029 16:02:25.590439       1 prometheus.go:69] 0.01
I1029 16:02:26.590754       1 prometheus.go:69] 0.01
I1029 16:02:27.590971       1 prometheus.go:69] 0.01
I1029 16:02:28.592463       1 prometheus.go:69] 0.01
I1029 16:02:29.592729       1 prometheus.go:69] 0.01
I1029 16:02:30.593005       1 prometheus.go:69] 0.01
I1029 16:02:31.593168       1 prometheus.go:69] 0.01
I1029 16:02:32.593392       1 prometheus.go:69] 0.01
I1029 16:02:33.593583       1 prometheus.go:69] 0.01
I1029 16:02:34.593773       1 prometheus.go:69] 0.01
I1029 16:02:35.593995       1 prometheus.go:69] 0.01
I1029 16:02:36.594152       1 prometheus.go:69] 0.01
I1029 16:02:37.594374       1 prometheus.go:69] 0.01
I1029 16:02:38.596154       1 prometheus.go:69] 0.01
I1029 16:02:39.596341       1 prometheus.go:69] 0.01
I1029 16:02:40.596530       1 prometheus.go:69] 0.01
I1029 16:02:41.599935       1 prometheus.go:69] 0.01
I1029 16:02:42.600159       1 prometheus.go:69] 0.01
I1029 16:02:43.600308       1 prometheus.go:69] 0.01
I1029 16:02:44.600551       1 prometheus.go:69] 0.01
I1029 16:02:45.600738       1 prometheus.go:69] 0.01
I1029 16:02:46.600898       1 prometheus.go:69] 0.01
I1029 16:02:47.601140       1 prometheus.go:69] 0.01
I1029 16:02:48.601328       1 prometheus.go:69] 0.01
I1029 16:02:49.601819       1 prometheus.go:69] 0.01
I1029 16:02:50.602078       1 prometheus.go:69] 0.01
I1029 16:02:51.602348       1 prometheus.go:69] 0.01
I1029 16:02:52.602565       1 prometheus.go:69] 0.01
I1029 16:02:53.602819       1 prometheus.go:69] 0.01
I1029 16:02:54.603045       1 prometheus.go:69] 0.01
I1029 16:02:55.603321       1 prometheus.go:69] 0.01
I1029 16:02:56.603572       1 prometheus.go:69] 0.01
I1029 16:02:57.603779       1 prometheus.go:69] 0.01
E1029 16:02:58.540140       1 util.go:163] Unable to write file from dataReader: unexpected EOF
E1029 16:02:58.540307       1 data-processor.go:221] unexpected EOF
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:115
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:191
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
E1029 16:02:58.540491       1 importer.go:173] unexpected EOF
unable to write to file
kubevirt.io/containerized-data-importer/pkg/util.StreamDataToFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/util/util.go:165
kubevirt.io/containerized-data-importer/pkg/importer.(*ImageioDataSource).TransferFile
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/imageio-datasource.go:115
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:191
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to transfer source data to target file
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:153
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:171
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357

Is this another bug - that the error is not propagated?
And also, is there another way you would suggest to verify this bug?

Comment 9 Ilanit Stein 2020-10-29 16:12:34 UTC
Adding that after the cdi importer pod gets into a CrashLoopBackOff state, it goes back to status "Running".

Comment 10 Piotr Kliczewski 2020-10-29 16:40:47 UTC
Is this information available on DV? Was this event (https://github.com/kubevirt/vm-import-operator/pull/367/files#diff-88694056f107609ecae52379508b2296457f11ddeb1355ab316a87b9ae21ae91R410) fired?

Comment 11 Ilanit Stein 2020-10-29 17:38:01 UTC
This event is not fired.

What happens eventually is that after a couple of "New phase" attempts, the cdi importer pod turns into a "Terminating" state.

The VM import was removed automatically.

1. Seems that the cdi importer behavior has changed since this bug was reported.
2. I'll file a separate bug for the current behavior.

Comment 12 Piotr Kliczewski 2020-10-30 07:48:46 UTC
Please create a new bug if you see any undesired behaviour. Would you mind providing a list of recent events?

The condition we created checks the container exit code, so if the CDI behaviour has changed, it might not be met.
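
A minimal sketch of that kind of check (hypothetical helper; the operator's actual condition lives in the linked PR):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// importerFailed reports whether any container in the importer pod has
// terminated with a non-zero exit code, the signal the condition relies on.
func importerFailed(pod *corev1.Pod) (bool, int32) {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.ExitCode != 0 {
			return true, t.ExitCode
		}
	}
	return false, 0
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			State: corev1.ContainerState{
				Terminated: &corev1.ContainerStateTerminated{ExitCode: 2},
			},
		}},
	}}
	failed, code := importerFailed(pod)
	fmt.Printf("failed=%v exitCode=%d\n", failed, code)
}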

Comment 15 Ilanit Stein 2020-11-02 10:53:17 UTC
Closing this bug since it cannot be verified.
The event added in this bug is never reached; the importer error that should have triggered it did not occur.

