Description of problem:
Disk resize resizes the image to a smaller size than expected.

Version-Release number of selected component:
virt-cdi-importer:v1.4.0
virt-cdi-cloner:v1.4.0
virt-cdi-uploadserver:v1.4.0
virt-cdi-controller:v1.4.0-7

How reproducible:
100%

Steps to Reproduce:
1. Create a PVC:

   oc create -f pvc.yaml

   Example yaml:
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     annotations:
       cdi.kubevirt.io/storage.contentType: kubevirt
       cdi.kubevirt.io/storage.import.endpoint: "https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz"
       cdi.kubevirt.io/storage.import.secretName: ""
       cdi.kubevirt.io/storage.import.source: http
     name: pvc-on-gluster-for-vmi-fedora
   spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 12Gi
     selector: ~
     storageClassName: glusterfs-storage

2. Check the disk size.

Actual results:
The disk size is substantially smaller than 12Gi; it is around 8Gi.

Expected results:
The disk size should be ~12Gi.

Additional info:

The imported image size:

$ ll
total 394192
-rw-r--r--@ 1 ngavrilo staff 194278292 Oct 25 03:00 Fedora-Cloud-Base-29-1.2.x86_64.raw.xz
$ xz -d Fedora-Cloud-Base-29-1.2.x86_64.raw.xz
$ ll -h
total 2104824
-rw-r--r-- 1 ngavrilo staff 4.0G Oct 25 03:00 Fedora-Cloud-Base-29-1.2.x86_64.raw

Output:

$ oc create -f pvc-test-gluster.yaml
persistentvolumeclaim "pvc-on-gluster-for-vmi-fedora" created

$ oc get pods -w
NAME                                           READY     STATUS    RESTARTS   AGE
docker-registry-1-j5cts                        1/1       Running   0          11h
importer-pvc-on-gluster-for-vmi-fedora-6nsvs   0/1       Pending   0          6s
local-volume-provisioner-jccvk                 1/1       Running   0          46m
local-volume-provisioner-p48h4                 1/1       Running   0          46m
local-volume-provisioner-xd684                 1/1       Running   0          46m
registry-console-1-z6hsg                       1/1       Running   0          11h
router-1-rzlst                                 1/1       Running   0          11h
importer-pvc-on-gluster-for-vmi-fedora-6nsvs   0/1       Pending             0          12s
importer-pvc-on-gluster-for-vmi-fedora-6nsvs   0/1       ContainerCreating   0          12s
importer-pvc-on-gluster-for-vmi-fedora-6nsvs   0/1       ContainerCreating   0          20s
importer-pvc-on-gluster-for-vmi-fedora-6nsvs   1/1       Running             0          21s

$ oc logs -f importer-pvc-on-gluster-for-vmi-fedora-6nsvs
I0210 22:24:39.597762       1 importer.go:45] Starting importer
I0210 22:24:39.598048       1 importer.go:64] begin import process
I0210 22:24:39.598054       1 importer.go:88] begin import process
I0210 22:24:39.598079       1 dataStream.go:293] copying "https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz" to "/data/disk.img"...
E0210 22:24:40.100489       1 dataStream.go:617] isoSize: Atoi error on endpoint "/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz": strconv.Atoi: parsing "f5": invalid syntax
I0210 22:24:40.113342       1 util.go:38] begin import...
I0210 22:27:10.308678       1 prlimit.go:107] ExecWithLimits qemu-img, [info --output=json /data/disk.img]
W0210 22:27:11.322559       1 dataStream.go:343] Available space less than requested size, resizing image to available space 8415285248.
I0210 22:27:11.322588       1 dataStream.go:349] Expanding image size to: 8415285248
I0210 22:27:11.323975       1 prlimit.go:107] ExecWithLimits qemu-img, [resize -f raw /data/disk.img 8415285248]
I0210 22:27:14.154757       1 importer.go:95] import complete

$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                   STORAGECLASS        REASON    AGE
pvc-9ce702c9-2d82-11e9-a6af-fa163e5ae887   12Gi       RWO            Delete           Bound     default/pvc-on-gluster-for-vmi-fedora   glusterfs-storage             9m

$ oc get pvc
NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
pvc-on-gluster-for-vmi-fedora   Bound     pvc-9ce702c9-2d82-11e9-a6af-fa163e5ae887   12Gi       RWO            glusterfs-storage   34m

$ oc get pods -n glusterfs
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-2rqsb   1/1       Running   0          11h
glusterfs-storage-bskvf                       1/1       Running   0          11h
glusterfs-storage-fbzgk                       1/1       Running   0          11h
glusterfs-storage-w985n                       1/1       Running   0          11h
heketi-storage-1-8gmbq                        1/1       Running   0          11h

$ oc rsh -n glusterfs glusterfs-storage-bskvf
sh-4.2# lsblk
NAME                                                                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                                                                               253:0    0   40G  0 disk
`-vda1                                                                            253:1    0   40G  0 part /var/lib/misc/glusterfsd
vdb                                                                               253:16   0   25G  0 disk
|-vg_b54ed58b6d3c118f61e09963e7113e72-tp_72cbe443ea43777fb03adaf9bef1dbfd_tmeta   252:0    0   12M  0 lvm
| `-vg_b54ed58b6d3c118f61e09963e7113e72-tp_72cbe443ea43777fb03adaf9bef1dbfd-tpool 252:2    0    2G  0 lvm
|   |-vg_b54ed58b6d3c118f61e09963e7113e72-tp_72cbe443ea43777fb03adaf9bef1dbfd     252:3    0    2G  0 lvm
|   `-vg_b54ed58b6d3c118f61e09963e7113e72-brick_72cbe443ea43777fb03adaf9bef1dbfd  252:4    0    2G  0 lvm  /var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_72cbe44
|-vg_b54ed58b6d3c118f61e09963e7113e72-tp_72cbe443ea43777fb03adaf9bef1dbfd_tdata   252:1    0    2G  0 lvm
| `-vg_b54ed58b6d3c118f61e09963e7113e72-tp_72cbe443ea43777fb03adaf9bef1dbfd-tpool 252:2    0    2G  0 lvm
|   |-vg_b54ed58b6d3c118f61e09963e7113e72-tp_72cbe443ea43777fb03adaf9bef1dbfd     252:3    0    2G  0 lvm
|   `-vg_b54ed58b6d3c118f61e09963e7113e72-brick_72cbe443ea43777fb03adaf9bef1dbfd  252:4    0    2G  0 lvm  /var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_72cbe44
|-vg_b54ed58b6d3c118f61e09963e7113e72-tp_68bb61f6a73ba1aa6753318a03ab523a_tmeta   252:8    0   64M  0 lvm
| `-vg_b54ed58b6d3c118f61e09963e7113e72-tp_68bb61f6a73ba1aa6753318a03ab523a-tpool 252:10   0   12G  0 lvm
|   |-vg_b54ed58b6d3c118f61e09963e7113e72-tp_68bb61f6a73ba1aa6753318a03ab523a     252:11   0   12G  0 lvm
|   `-vg_b54ed58b6d3c118f61e09963e7113e72-brick_68bb61f6a73ba1aa6753318a03ab523a  252:12   0   12G  0 lvm  /var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f
`-vg_b54ed58b6d3c118f61e09963e7113e72-tp_68bb61f6a73ba1aa6753318a03ab523a_tdata   252:9    0   12G  0 lvm
  `-vg_b54ed58b6d3c118f61e09963e7113e72-tp_68bb61f6a73ba1aa6753318a03ab523a-tpool 252:10   0   12G  0 lvm
    |-vg_b54ed58b6d3c118f61e09963e7113e72-tp_68bb61f6a73ba1aa6753318a03ab523a     252:11   0   12G  0 lvm
    `-vg_b54ed58b6d3c118f61e09963e7113e72-brick_68bb61f6a73ba1aa6753318a03ab523a  252:12   0   12G  0 lvm  /var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f
vdc                                                                               253:32   0   20G  0 disk
|-vg_local_storage-lv_local1                                                      252:5    0  6.6G  0 lvm
|-vg_local_storage-lv_local2                                                      252:6    0  6.6G  0 lvm
`-vg_local_storage-lv_local3                                                      252:7    0  6.6G  0 lvm

sh-4.2# find . -name disk.img
./var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f6a73ba1aa6753318a03ab523a/brick/disk.img

sh-4.2# ls -lh ./var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f6a73ba1aa6753318a03ab523a/brick/disk.img
-rwxr-xr-x. 2 1000000000 2000 7.9G Feb 10 22:27 ./var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f6a73ba1aa6753318a03ab523a/brick/disk.img

sh-4.2# df -h
Filesystem                                                                              Size  Used Avail Use% Mounted on
overlay                                                                                 40G   9.3G  31G   24% /
devtmpfs                                                                                3.8G  0     3.8G  0%  /dev
tmpfs                                                                                   3.9G  0     3.9G  0%  /sys/fs/cgroup
tmpfs                                                                                   3.9G  0     3.9G  0%  /dev/shm
tmpfs                                                                                   3.9G  6.2M  3.9G  1%  /run/lvm
/dev/vda1                                                                               40G   9.3G  31G   24% /run
tmpfs                                                                                   3.9G  16K   3.9G  1%  /run/secrets/kubernetes.io/serviceaccount
/dev/mapper/vg_b54ed58b6d3c118f61e09963e7113e72-brick_72cbe443ea43777fb03adaf9bef1dbfd  2.0G  33M   2.0G  2%  /var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_72cbe443ea43777fb03adaf9bef1dbfd
/dev/mapper/vg_b54ed58b6d3c118f61e09963e7113e72-brick_68bb61f6a73ba1aa6753318a03ab523a  12G   4.1G  8.0G  34% /var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f6a73ba1aa6753318a03ab523a
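For reference, the value in the resize warning matches the brick's free space rather than the PVC request; quick shell arithmetic:

$ echo "$((8415285248 / 1024 / 1024)) MiB"
8025 MiB

8025 MiB is ~7.8Gi, which lines up with the 8.0G "Avail" column above and the 7.9G disk.img. The resize itself can be replayed by hand with the same qemu-img invocation the importer logs; a minimal sketch, assuming GNU df is available and /data is the PVC mount as in the log:

# bytes available on the filesystem backing the PVC mount
$ AVAIL=$(df -B1 --output=avail /data | tail -n 1)
# the importer's own call, per the log above
$ qemu-img resize -f raw /data/disk.img "$AVAIL"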
Verified: the disk size is now as requested in the YAML.
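For anyone re-running this verification, the quickest confirmation is to compare the virtual size qemu-img reports against the PVC request (the brick path here is the one located in the earlier comment; any path to the imported disk.img works):

sh-4.2# qemu-img info ./var/lib/heketi/mounts/vg_b54ed58b6d3c118f61e09963e7113e72/brick_68bb61f6a73ba1aa6753318a03ab523a/brick/disk.img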
OK, I have to reopen this one. I wasn't using raw.xz for this verification, and apparently it is crucial!

Now, when I used this image:
https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz
with local storage, the importer pod log shows:

$ oc logs -f importer-pvc-on-hdd-for-vmi-fedora-wskln
I0218 16:36:47.443678       1 importer.go:45] Starting importer
I0218 16:36:47.444377       1 importer.go:64] begin import process
I0218 16:36:47.444393       1 importer.go:88] begin import process
I0218 16:36:47.444409       1 dataStream.go:293] copying "https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz" to "/data/disk.img"...
E0218 16:36:49.390983       1 dataStream.go:617] isoSize: Atoi error on endpoint "/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz": strconv.Atoi: parsing "f5": invalid syntax
I0218 16:36:49.420779       1 util.go:38] begin import...
I0218 16:38:59.604893       1 prlimit.go:107] ExecWithLimits qemu-img, [info --output=json /data/disk.img]
W0218 16:38:59.645121       1 dataStream.go:343] Available space less than requested size, resizing image to available space 2141646848.
I0218 16:38:59.645151       1 dataStream.go:349] Expanding image size to: 2141646848
I0218 16:38:59.645160       1 prlimit.go:107] ExecWithLimits qemu-img, [resize -f raw /data/disk.img 2141646848]
I0218 16:39:00.162643       1 importer.go:95] import complete

The yaml used:

$ oc get pvc pvc-on-hdd-for-vmi-fedora -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.import.endpoint: https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.raw.xz
    cdi.kubevirt.io/storage.import.importPodName: importer-pvc-on-hdd-for-vmi-fedora-wskln
    cdi.kubevirt.io/storage.import.secretName: ""
    cdi.kubevirt.io/storage.import.source: http
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: 2019-02-18T16:37:04Z
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
  name: pvc-on-hdd-for-vmi-fedora
  namespace: default
  resourceVersion: "2619992"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/pvc-on-hdd-for-vmi-fedora
  uid: 6cb5dcf8-339b-11e9-a6af-fa163e5ae887
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: hdd
  volumeMode: Filesystem
  volumeName: local-pv-6ba3b9a9
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 6521Mi
  phase: Bound

Notes:
- The actual space available is around 6.5Gi.
- The requested PVC size is 6Gi.
- The actual disk.img size is 2Gi:

node2 ~]$ ll -h /mnt/local-storage/hdd/disk3/disk.img
-rwxr-xr-x. 1 1000000000 1000000000 2.0G Feb 18 11:38 /mnt/local-storage/hdd/disk3/disk.img
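For reference, the size in that resize line matches the observed file exactly, and is far below both the request and the real free space (quick shell arithmetic):

$ echo "$((2141646848 / 1024 / 1024)) MiB"
2042 MiB

2042 MiB is the 2.0G that ls -lh shows, against a 6Gi request and a 6521Mi capacity.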
Could you use a qcow2 (not compressed) to verify? By using a compressed qcow2 you are following a different path, which has a known issue (the same as upload): to make it work, you need a PVC of 2x the actual size plus the virtual size.
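For example, an uncompressed qcow2 can be prepared from the same Fedora image like this (a sketch; the output filename is my own choice):

$ xz -dk Fedora-Cloud-Base-29-1.2.x86_64.raw.xz
$ qemu-img convert -f raw -O qcow2 Fedora-Cloud-Base-29-1.2.x86_64.raw Fedora-Cloud-Base-29-1.2.x86_64.qcow2
$ qemu-img info Fedora-Cloud-Base-29-1.2.x86_64.qcow2   # should report "file format: qcow2" and a 4.0G virtual size

The resulting file can then be hosted on any reachable HTTP endpoint and referenced from the cdi.kubevirt.io/storage.import.endpoint annotation.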
@Alexander, you're right. Now I tried a non-compressed qcow2, and the importer log shows:

W0219 16:22:11.788895       1 dataStream.go:349] Available space less than requested size, resizing image to available space 12710875136.
I0219 16:22:11.789187       1 dataStream.go:355] Expanding image size to: 12710875136

and the disk.img size is indeed 12G:

sh-4.2# ls -lh /var/lib/heketi/mounts/vg_027d04e526e744e0a3ea0d98eae43338/brick_b814fa20e29eeabb6b754248ede1f50a/brick/disk.img
-rw-r--r--. 2 1000000000 2000 12G Feb 19 16:22 /var/lib/heketi/mounts/vg_027d04e526e744e0a3ea0d98eae43338/brick_b814fa20e29eeabb6b754248ede1f50a/brick/disk.img

Moving back to verified.
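Quick sanity check on the number in that log line:

$ echo "$((12710875136 / 1024 / 1024)) MiB"
12122 MiB

That is ~11.8Gi, i.e. essentially the full usable space on the 12Gi brick, matching the 12G that ls -lh reports.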
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0417