Red Hat Bugzilla – Bug 1450908
Useless warning is displayed when cloning a guest using a real disk partition as backend storage
Last modified: 2018-04-10 07:42:12 EDT
Description of problem:
A useless warning is displayed when cloning a guest using a real disk partition as backend storage.

Version-Release number of selected component (if applicable):
virt-manager-1.4.1-3.el7.noarch
virt-install-1.4.1-3.el7.noarch
libvirt-3.2.0-4.el7.x86_64
qemu-kvm-rhev-2.9.0-3.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a healthy guest with a file-backed disk:
# virsh dumpxml import
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/sgiotest.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
...

# qemu-img info /var/lib/libvirt/images/sgiotest.img
image: /var/lib/libvirt/images/sgiotest.img
file format: raw
virtual size: 6.0G (6442450944 bytes)
disk size: 4.4G

2. Clone the guest using a real disk partition as backend storage.
# qemu-img info /dev/sdb2
image: /dev/sdb2
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 0

# virt-clone -o import -n import-clone --file=/dev/sdb2 --check all=off
WARNING This will overwrite the existing path '/dev/sdb2'
WARNING The filesystem will not have enough free space to fully allocate the sparse file when the guest is running. 6144 M requested > 3866 M available
Cloning sgiotest.img | 6.0 GB 00:02:17
Clone 'import-clone' created successfully.

Actual results:
As described, a useless warning is displayed during cloning:
WARNING The filesystem will not have enough free space to fully allocate the sparse file when the guest is running. 6144 M requested > 3866 M available

Expected results:
No such free-space warning is displayed when cloning onto a real disk partition.

Additional info:
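For illustration only, here is one plausible way such a misleading warning can arise when the target is a block device; this is a minimal sketch, not the actual virt-clone code, and the "available" figure it prints is only my assumption about where the 3866 M comes from (free space of the filesystem holding the path, e.g. the /dev tmpfs, rather than the capacity of the partition itself):

# Minimal sketch (not the virt-clone implementation) of a naive
# free-space check that is meaningless for block device targets.
import os

def naive_available_mib(path):
    # statvfs() of the parent directory reports free space of the
    # filesystem that holds the path (for /dev/sdb2 that is /dev),
    # not the capacity of the partition itself.
    st = os.statvfs(os.path.dirname(path) or ".")
    return st.f_frsize * st.f_bavail // (1024 * 1024)

requested_mib = 6144  # virtual size of sgiotest.img from the report
available_mib = naive_available_mib("/dev/sdb2")
if requested_mib > available_mib:
    print("WARNING ... %d M requested > %d M available"
          % (requested_mib, available_mib))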
Upstream patch posted: https://www.redhat.com/archives/virt-tools-list/2017-October/msg00012.html
Upstream commit:

commit 6e6f59e7abfd85b2a53554b7d091e553585e85c8
Author: Pavel Hrdina <phrdina@redhat.com>
Date:   Tue Oct 3 16:59:13 2017 +0200

    diskbackend: get a proper size of existing block device while cloning
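The commit title describes querying the real capacity of an existing block device instead of trusting stat() or filesystem free space. A rough illustration of that idea (not the upstream patch itself; the helper name is mine):

# Illustrative sketch of obtaining the proper size of a clone target.
import os, stat

def device_or_file_size(path):
    mode = os.stat(path).st_mode
    if stat.S_ISBLK(mode):
        # For a block device, seek to the end and read back the offset;
        # st_size of the device node would be 0.
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.lseek(fd, 0, os.SEEK_END)  # capacity in bytes
        finally:
            os.close(fd)
    return os.path.getsize(path)  # regular file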
Try to verify this bug with the new build:
virt-manager-1.4.3-2.el7.noarch
virt-install-1.4.3-2.el7.noarch

1. Prepare a healthy shut-off guest.
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/rhel7.4-clone.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
...

Then check the disk size.
# qemu-img info /var/lib/libvirt/images/rhel7.4-clone.qcow2
image: /var/lib/libvirt/images/rhel7.4-clone.qcow2
file format: qcow2
virtual size: 9.0G (9663676416 bytes)
disk size: 3.9G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

2. Clone the guest using a real disk partition as backend storage.

Scenario-1: full_block_disk_size < 9G
# qemu-img info /dev/sdb
image: /dev/sdb
file format: raw
virtual size: 7.5G (8006926336 bytes)
disk size: 0

# virt-clone -o rhel7.4 -n 9-clone --file=/dev/sdb --check all=off
WARNING This will overwrite the existing path '/dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3:1.0-scsi-0:0:0:0'
WARNING The filesystem will not have enough free space to fully allocate the sparse file when the guest is running. 9216 M requested > 7636 M available
Cloning rhel7.4-clone.qcow2 | 9.0 GB 00:03:39
Clone '9-clone' created successfully.

# qemu-img info /dev/sdb
image: /dev/sdb
file format: qcow2
virtual size: 9.0G (9663676416 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

Result:
i. Cloning begins with the calculated size '9216 M requested > 7636 M available' in the warning, as expected.
ii. The new guest can boot up.

Scenario-2: full_block_disk_size > 9G
# qemu-img info /dev/sdc
image: /dev/sdc
file format: raw
virtual size: 29G (31004295168 bytes)
disk size: 0

# virt-clone -o rhel7.4 -n 16clone --file=/dev/sdc --check all=off
WARNING This will overwrite the existing path '/dev/sdc'
Cloning rhel7.4-clone.qcow2

# qemu-img info /dev/sdc
image: /dev/sdc
file format: qcow2
virtual size: 9.0G (9663676416 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

Result:
i. Cloning begins without the calculated size warning, as expected.
ii. The new guest can boot up.

Scenario-3: block disk has 3 partitions; select a partition smaller than 3.9G (the disk size actually used by the original guest)
# lsblk |grep sdc
sdc      8:32   1 28.9G  0 disk
├─sdc1   8:33   1    2G  0 part
├─sdc2   8:34   1    1G  0 part
└─sdc3   8:35   1    9G  0 part

Then select sdc1 as the backend.
# virt-clone -o rhel7.4 -n sdc1 --file=/dev/sdc1 --check all=off
WARNING This will overwrite the existing path '/dev/sdc1'
WARNING The filesystem will not have enough free space to fully allocate the sparse file when the guest is running. 9216 M requested > 2048 M available
Cloning rhel7.4-clone.qcow2 | 2.0 GB 00:00:12
...
Clone 'sdc1' created successfully.

# qemu-img info /dev/sdc1
image: /dev/sdc1
file format: qcow2
virtual size: 9.0G (9663676416 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

Result:
i. Cloning begins with the calculated size '9216 M requested > 2048 M available' in the warning, as expected. But I am concerned whether the data in sdc2 or the other partitions will be overwritten, since sdc1 is not big enough.
ii. The new guest failed to boot into the OS (please see the attached screenshot). So, while cloning a guest, does virt-clone calculate the minimum size it needs?
@Pavel, I also have another question about Scenario-1 and Scenario-3: after cloning, the reported virtual disk size is 9.0G, which exceeds the size of the device itself (7.5G for sdb and 2.0G for sdc1). Is this a bug? Thanks.
Created attachment 1359289 [details] Screenshot for guest 'sdc1'
Hi, in scenarios 1 and 3 the user has disabled all the checks, which means the guest is force-cloned even if it might not work and might not fit into the destination. If something doesn't work, it's not a bug; virt-clone does exactly what the user asked it to do. In scenario 3 the guest probably didn't boot because the disk size (the size that is actually used) is 3.9G but the destination partition is only 2G, so virt-clone was not able to clone the whole disk.
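Since --check all=off disables virt-clone's own safety checks, a manual pre-clone check along the following lines could catch the scenario-3 truncation ahead of time. This is only an illustrative sketch, not part of virt-clone; the helper names are mine and the paths are the ones used in the scenarios above:

# Compare the source image's sizes against the destination capacity
# before overwriting an existing block device.
import json, os, subprocess

def qemu_img_info(path):
    out = subprocess.check_output(
        ["qemu-img", "info", "--output=json", path])
    return json.loads(out)

def block_device_capacity(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)  # bytes
    finally:
        os.close(fd)

src = qemu_img_info("/var/lib/libvirt/images/rhel7.4-clone.qcow2")
dst = block_device_capacity("/dev/sdc1")
if src["actual-size"] > dst:
    print("Destination is smaller than the data actually allocated in "
          "the source; the clone will be truncated (scenario 3).")
elif src["virtual-size"] > dst:
    print("Destination holds the current data but is smaller than the "
          "virtual size; the guest may run out of space later.")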
To make the verification steps and results clearer, based on Comment 6 and Comment 8:

Tried to verify this bug with the new build:
virt-manager-1.4.3-2.el7.noarch
virt-install-1.4.3-2.el7.noarch

The steps and command output are the same as in the verification above; the clarified results are:

1. Prepare a healthy shut-off guest backed by /var/lib/libvirt/images/rhel7.4-clone.qcow2 and check its disk size with qemu-img info.
Result: The actual size the guest uses is 3.9G (virtual size 9.0G).

2. Clone the guest using a real disk partition as backend storage.

Scenario-1: full_block_disk_size < 9G (/dev/sdb, 7.5G)
Result:
i. Cloning begins with the calculated size '9216 M requested > 7636 M available' in the warning, as expected.
ii. The new guest can boot up.

Scenario-2: full_block_disk_size > 9G (/dev/sdc, 29G)
Result:
i. Cloning begins without the calculated size warning, as expected.
ii. The new guest can boot up.

Scenario-3: block disk has 3 partitions; /dev/sdc1 (2G) selected, smaller than the 3.9G actually used by the original guest
Result:
i. Cloning begins with the calculated size '9216 M requested > 2048 M available' in the warning, as expected.
ii. The new guest failed to boot up, as expected, because the disk size (the size that is actually used) is 3.9G but the destination partition is only 2G, so virt-clone was not able to clone the whole disk.

Based on the above three testing scenarios, move this bug from ON_QA to VERIFIED. Thanks for Pavel's help.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:0726