Description of problem:

virt-v2v/libguestfs uses /var/tmp by default when importing VMs to RHV. However, some guests are excessively large and will not fit into the RHEL/RHV-H filesystem at all; the host might not even have enough disk capacity to perform some conversions. Our documentation[1] only states that /var needs to be 60G or larger if one wants to deploy HE; we state a minimum requirement of 15G.

Since there is no way to magically predict a size that will fit all possible VMs, it would be sensible to at least allow customizing the directory used for these v2v volumes, so that the administrator can work around hypervisor storage limitations (attaching NFS?). Note that a separate directory for this is desirable: temporarily mounting an NFS share on /var/tmp may make temporary files from other processes disappear when the conversion finishes and the administrator removes the share providing the extra space (umount), if those files are not open at the exact time umount is run.

An environment variable can be set when calling virt-v2v to customize this location. See:
https://www.redhat.com/archives/libguestfs/2012-November/msg00018.html
https://github.com/oVirt/vdsm/blob/master/lib/vdsm/v2v.py#L514

Some higher-level orchestration to configure this, check space, etc. would be even better.

Version-Release number of selected component (if applicable):
vdsm-4.19.10.1-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Import a big guest while having a small /var/tmp

Actual results:
/var/tmp runs out of space

Additional info:
[1] Item 2.2.3 https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/installation_guide/sect-hypervisor_requirements
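For illustration, a minimal sketch of the kind of workaround this RFE asks for, assuming the standard libguestfs environment variables (TMPDIR for scratch files such as the overlay, LIBGUESTFS_CACHEDIR for the cached appliance); the NFS export and mount point below are hypothetical:

  # Hypothetical workaround sketch: redirect libguestfs scratch and
  # appliance cache to a larger filesystem before running the conversion.
  mount nfs-server:/export/v2v-scratch /mnt/v2v-scratch
  export TMPDIR=/mnt/v2v-scratch                 # scratch files, e.g. the qcow2 overlay
  export LIBGUESTFS_CACHEDIR=/mnt/v2v-scratch    # cached supermin appliance
  virt-v2v ...                                   # then run the conversion as usual

For Engine-driven imports, vdsm would have to export such variables itself when it spawns virt-v2v (see the v2v.py link above), which is why some higher-level orchestration is suggested.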
What was the size of the VM when you ran out of space? How much space did you have in /var/tmp, and where did you import from? It shouldn't need more than a few tens of MBs.
Hi Michal,

(In reply to Michal Skrivanek from comment #1)
> What was the size of the VM when you run out of space

The customer tried to import a 200GB VM and /var/tmp had 15GB (the default on RHV-H). It failed. With smaller VMs he succeeds.

> how much you had in that /var/tmp, where did you import from?

My /var has 177.2GB free. I filled /var with a 176.5GB file, leaving 250MB free in that filesystem (which /var/tmp is inside). Then I tried to import a VM from a 300MB OVA, and it failed.

Then I removed the big file and tried again to check how much space it would use. First, it needed 431MB for a supermin5 command:

/usr/bin/supermin5 --build --verbose --if-newer --lock /var/tmp/.guestfs-36/lock --copy-kernel -f ext2 --host-cpu x86_64 /usr/lib64/guestfs/supermin.d -o /var/tmp/.guestfs-36/appliance.d

431M /var/tmp/

After the supermin5, it ran this, which AFAIK actually performs the conversion:

/usr/libexec/qemu-kvm -global virtio-blk-pci.scsi=off -nodefconfig -enable-fips -nodefaults -display none -machine accel=kvm:tcg -cpu host -m 800 -no-reboot -rtc driftfix=slew -no-hpet -global kvm-pit.lost_tick_policy=discard -kernel /var/tmp/.guestfs-36/appliance.d/kernel -initrd /var/tmp/.guestfs-36/appliance.d/initrd -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -device virtio-scsi-pci,id=scsi -drive file=/var/tmp/v2vovlcfaf03.qcow2,cache=unsafe,discard=unmap,format=qcow2,copy-on-read=on,id=hd0,if=none -device scsi-hd,drive=hd0 -drive file=/var/tmp/.guestfs-36/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw -device scsi-hd,drive=appliance -device virtio-serial-pci -serial stdio -device sga -chardev socket,path=/tmp/libguestfs0uqtiQ/guestfsd.sock,id=channel0 -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -netdev user,id=usernet,net=169.254.0.0/16 -device virtio-net-pci,netdev=usernet -append panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 guestfs_network=1 TERM=linux guestfs_identifier=v2v

This used a qcow2 file at /var/tmp/v2vovlcfaf03.qcow2. I saw this file growing while the qemu-kvm process was running, but I don't have its final size because my monitoring script stopped.

But I am not sure this is important anymore. As Bimal pointed out, there is upstream BZ1371875, which seems closely related, and you appear to have worked on it with several patches across several components. The description is quite similar. Aren't the new versions fixing this just around the corner? For example, I don't see those changes in RHEL 7.3 (v1.32), only in 7.4 (v1.36). That BZ also mentions newer libvirt and qemu, which we don't have yet in 7.3.

https://www.redhat.com/archives/libguestfs/2017-February/msg00274.html

If you think this is different from that BZ, please let me know what data you need and I will get it for you. Otherwise, feel free to close this as a DUP of the other BZ if those patches will improve this.
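For anyone re-running this test, a hedged sketch of a monitoring loop that captures the sizes discussed above (the v2vovl* pattern matches the overlay file seen in the qemu-kvm command line; the log path and sampling interval are arbitrary):

  # Sample /var/tmp usage once per second while the conversion runs,
  # recording overlay/appliance sizes and free space for later review.
  while true; do
      date >> /root/v2v-space.log
      du -sh /var/tmp/v2vovl*.qcow2 /var/tmp/.guestfs-* >> /root/v2v-space.log 2>/dev/null
      df -h /var/tmp | tail -n 1 >> /root/v2v-space.log
      sleep 1
  done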
Hi Germano, yes, as noted in https://bugzilla.redhat.com/show_bug.cgi?id=1371875#c3 it is all finished upstream already. Therefore I do not think this RFE is needed anymore. It still uses the /var/tmp location but the size is significantly reduced. This should be available in existing RHVs once 7.4 is released. If you want to experiment with 7.4 beta or upstream, just beware of bug 1444426 before doing so.
can you share v2v logs for both 7.3 and 7.4?
Created attachment 1346572 [details] RHEL 7.4 import log
Created attachment 1346573 [details] RHEL 7.3 import log
Reassigned, please see https://bugzilla.redhat.com/show_bug.cgi?id=1449869#c17
The RHEL 7.3 virt-v2v clearly did a "tar xf" at the beginning, and the 7.4 version did not. It is likely you are not observing the sizes correctly: the log contains the available space after the untar step, but not the initial one. Can you check the free space prior to launching each import and compare it with what is logged at the beginning of the import logs?
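For example, something along these lines right before launching each import would give the baseline to compare against the free-space figure logged at the start of the conversion (the output file path is arbitrary):

  # Record free space on /var/tmp immediately before starting the import,
  # then compare it with the value printed near the top of the v2v import log.
  df -h /var/tmp | tee -a /root/v2v-free-space-before.txt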
moving back to ON_QA to re-verify
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [No relevant external trackers attached] For more info please contact: rhv-devops
Verification builds:

RHEL 7.5:
vdsm-4.20.19-1.el7ev.x86_64
qemu-kvm-rhev-2.10.0-19.el7.x86_64
libvirt-client-3.9.0-13.el7.x86_64
virt-v2v-1.36.10-6.el7.x86_64

RHEL 7.3:
vdsm-4.19.21-1.el7ev.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.9.x86_64
libvirt-client-2.0.0-10.el7_3.9.x86_64
virt-v2v-1.32.7-3.el7_3.3.x86_64

Verification scenario:
1. Observe /var/tmp/ disk usage differences during VMware OVA import between RHEL 7.3 and RHEL 7.5.

Results example:

- win7.ova (size of 4.1G) import:
  RHEL 7.3 disk usage: 4.2G
  RHEL 7.5 disk usage: 425M

- rhel7.ova (size of 691M) import:
  RHEL 7.3 disk usage: 1.1G
  RHEL 7.5 disk usage: 492M
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:1489