Description of problem:
* Created a 1Gi golden Fedora PVC, which imported the Fedora Cloud qcow2 image.
* Created a 1Gi DataVolume with the PVC above as its source; it failed with "no space left on device".
* Increased the DataVolume size to 8Gi (just to make sure) and it worked.

The cloning process seems to make the resulting file non-sparse. I mounted both LVs on one of the cluster nodes and ran qemu-img info on them; here are the results:

[root@node02 ~]# qemu-img info /mnt/1/brick/disk.img
image: /mnt/1/brick/disk.img
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 778M

[root@node02 ~]# qemu-img info /mnt/2/brick/disk.img
image: /mnt/2/brick/disk.img
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 4.0G

Looking at the LVs, both belong to the same VG and there doesn't seem to be a single difference in the settings:

[root@node02 ~]# lvdisplay /dev/vg_729faf60b0a3b879b051db5abc4e94a1/brick_ec125589977d4a629c7020436615f4f6
  --- Logical volume ---
  LV Path                /dev/vg_729faf60b0a3b879b051db5abc4e94a1/brick_ec125589977d4a629c7020436615f4f6
  LV Name                brick_ec125589977d4a629c7020436615f4f6
  VG Name                vg_729faf60b0a3b879b051db5abc4e94a1
  LV UUID                4YcvuU-Bxaq-ScDv-qpww-OIsF-J611-yan3oi
  LV Write Access        read/write
  LV Creation host, time node02.example.com, 2018-12-12 06:35:54 -0500
  LV Pool name           tp_ec125589977d4a629c7020436615f4f6
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            50.17%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:19

[root@node02 ~]# lvdisplay /dev/vg_729faf60b0a3b879b051db5abc4e94a1/brick_0e488dded91b45787fa21a742b792753
  --- Logical volume ---
  LV Path                /dev/vg_729faf60b0a3b879b051db5abc4e94a1/brick_0e488dded91b45787fa21a742b792753
  LV Name                brick_0e488dded91b45787fa21a742b792753
  VG Name                vg_729faf60b0a3b879b051db5abc4e94a1
  LV UUID                5b7lTN-iUav-Cxm1-t5Fp-5WLL-pqqL-ZktxPu
  LV Write Access        read/write
  LV Creation host, time node02.example.com, 2018-12-11 12:02:38 -0500
  LV Pool name           tp_0e488dded91b45787fa21a742b792753
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            78.98%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:14

Also, running xfs_info against the LVs shows the same settings:

[root@node02 ~]# xfs_info /dev/vg_729faf60b0a3b879b051db5abc4e94a1/brick_ec125589977d4a629c7020436615f4f6
meta-data=/dev/mapper/vg_729faf60b0a3b879b051db5abc4e94a1-brick_ec125589977d4a629c7020436615f4f6 isize=512    agcount=8, agsize=262144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@node02 ~]# xfs_info /dev/vg_729faf60b0a3b879b051db5abc4e94a1/brick_0e488dded91b45787fa21a742b792753
meta-data=/dev/mapper/vg_729faf60b0a3b879b051db5abc4e94a1-brick_0e488dded91b45787fa21a742b792753 isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

So the problem doesn't seem to be related to either the filesystem or the LVs.
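As an aside, the behaviour above is consistent with what GNU tar does when an archive is created without -S/--sparse: holes in the source file are stored and re-written as literal zeros, so the extracted copy ends up fully allocated. A minimal local sketch (not taken from the cluster; hypothetical /tmp paths, assumes GNU tar):

mkdir -p /tmp/src /tmp/dense /tmp/sparse
truncate -s 4G /tmp/src/disk.img                    # sparse file: 4G apparent size, ~0 allocated
du -h --apparent-size /tmp/src/disk.img             # -> 4.0G
du -h /tmp/src/disk.img                             # -> 0

# Round trip through tar WITHOUT --sparse: the holes come back as written zeros.
tar -C /tmp/src -cf - . | tar -C /tmp/dense -xf -
du -h /tmp/dense/disk.img                           # -> ~4.0G allocated

# Round trip with -S/--sparse on creation: the holes survive.
tar -C /tmp/src -cSf - . | tar -C /tmp/sparse -xf -
du -h /tmp/sparse/disk.img                          # -> ~0 allocated

If the cloner's tar invocation does not preserve sparseness, that would explain why a 1Gi target is too small for an image that only occupies 778M on the source side.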
Version-Release number of selected component (if applicable):
OCP: v3.11.43
CDI: v1.3.0
OCS images:
registry.redhat.io/rhgs3/rhgs-server-rhel7       latest   53c83497482a2   431MB
registry.redhat.io/rhgs3/rhgs-volmanager-rhel7   latest   64d4857090bd8   327MB

How reproducible:
Always.

Steps to Reproduce:
1. Create a 1Gi golden PVC that imports the Fedora Cloud qcow2 image.
2. Create a 1Gi DataVolume whose source is the PVC above.
3. Watch the clone-target pod fail with "no space left on device".

Actual results:
cloner pod ends in *Error*

[root@workstation-ddc4 ~]# oc logs -f clone-target-pod-rbwlb -n 201dv
cloner: Starting clone target
cloner: check if the fifo pipe was created by the cloning source pod
/tmp/clone/image /
cloner: extract the image from /tmp/clone/socket/c662a4c2-fe01-11e8-8133-2cabcdef0010/pipe into /tmp/clone/image directory
./
./disk.img
tar: ./disk.img: Cannot write: No space left on device
tar: ./disk.img: Cannot utime: No space left on device

Expected results:
Is this the expected result? Or should the resulting file be a sparse file as well?

Additional info:

* Golden PVC definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2
    cdi.kubevirt.io/storage.import.importPodName: importer-golden-fedora-pvc-vdmmq
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"cdi.kubevirt.io/storage.import.endpoint":"https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2"},"labels":{"app":"containerized-data-importer"},"name":"golden-fedora-pvc","namespace":"201dvsource"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2018-12-11T17:02:38Z
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
  name: golden-fedora-pvc
  namespace: 201dvsource
  resourceVersion: "33005"
  selfLink: /api/v1/namespaces/201dvsource/persistentvolumeclaims/golden-fedora-pvc
  uid: 904b11db-fd66-11e8-869d-2cabcdef0010
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-storage
  volumeName: pvc-904b11db-fd66-11e8-869d-2cabcdef0010
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

* DataVolume definition:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cdi.kubevirt.io/v1alpha1","kind":"DataVolume","metadata":{"annotations":{},"name":"example-pvc-dv","namespace":"201dv"},"spec":{"pvc":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"8Gi"}}},"source":{"pvc":{"name":"golden-fedora-pvc","namespace":"201dvsource"}}}}
  creationTimestamp: 2018-12-12T11:35:54Z
  generation: 1
  name: example-pvc-dv
  namespace: 201dv
  resourceVersion: "55568"
  selfLink: /apis/cdi.kubevirt.io/v1alpha1/namespaces/201dv/datavolumes/example-pvc-dv
  uid: 15d83cc9-fe02-11e8-8133-2cabcdef0010
spec:
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 8Gi
  source:
    pvc:
      name: golden-fedora-pvc
      namespace: 201dvsource
status:
  phase: Succeeded

NOTE: This is the DV that succeeded; the original had `requests.storage: 1Gi`.
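For completeness, the DataVolume as originally applied (reconstructed from the NOTE above; identical to the definition shown except for the 1Gi storage request) would look roughly like this:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: example-pvc-dv
  namespace: 201dv
spec:
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi   # same size as the golden PVC; this is the request that hit "no space left on device"
  source:
    pvc:
      name: golden-fedora-pvc
      namespace: 201dvsource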
*** This bug has been marked as a duplicate of bug 1658615 ***