Description of problem:
VDSM should take advantage of the fix for bug 963881. According to vdsm/virt/vm.py:4426:

    def _diskSizeExtendCow(self, drive, newSizeBytes):
        # Apparently this is what libvirt would do anyway, except that
        # it would fail on NFS when root_squash is enabled, see BZ#963881
        # Patches have been submitted to avoid this behavior, the virtual
        # and apparent sizes will be returned by the qemu process and
        # through the libvirt blockInfo call.
        currentSize = qemuimg.info(drive.path, "qcow2")['virtualsize']

Version-Release number of selected component (if applicable):
vdsm-4.14.13-1.el6ev
The fix is verified in libvirt-0.10.2-31.el6.x86_64, but vdsm still allows libvirt-0.10.2-29.el6_5.7.x86_64 (which lacks the fix) on RHEL6 hosts. Are we willing to bump the requirement for RHEL6 up to -31? If not, then we cannot switch to the new method without breaking RHEL6 (unless we add code to try both methods, first libvirt and then qemu-img, as sketched below).
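For reference, such a dual-method helper could look roughly like the following. This is a minimal, untested sketch: the helper name _getDriveVirtualSize, the self._dom attribute, and the import paths are assumptions modeled on the vdsm code quoted in the description, not actual vdsm API.

    import libvirt
    from vdsm import qemuimg  # import path is an assumption

    def _getDriveVirtualSize(self, drive):
        # Preferred path: ask libvirt for (capacity, allocation, physical).
        # This works on NFS with root_squash, where qemu-img running as
        # root cannot read the image file (see BZ#963881).
        try:
            capacity, _alloc, _physical = self._dom.blockInfo(drive.path, 0)
            return capacity
        except libvirt.libvirtError:
            # Older libvirt (e.g. -29 on RHEL6) may still fail here;
            # fall back to the original qemu-img based query.
            return qemuimg.info(drive.path, "qcow2")['virtualsize']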
Just to clarify - the title was a typo - it was supposed to be "blocked", not "blocker". IMHO, as long as -31 is available on CentOS, sure - let's bump the requirement. If not, let's wait for it, and then bump the requirement.
As of 8/12/2014 the current libvirt version (in 6.5) is libvirt-0.10.2-29.el6.x86_64.rpm. Keeping it targeted for 3.6.0.
Created attachment 926207 [details]
proposed patch to fix the issue

Here is a quick, untested patch to switch to using the libvirt API. Putting this here for reference for when we get back to working on this bug.
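The gist of the attached patch is presumably a one-line swap inside _diskSizeExtendCow, roughly as below. This is a hedged reconstruction based on the code quoted in the description, not the attachment itself; self._dom is assumed to wrap the libvirt domain, whose blockInfo() (virDomainGetBlockInfo) returns capacity, allocation, and physical sizes.

    def _diskSizeExtendCow(self, drive, newSizeBytes):
        # Query the current virtual size through libvirt instead of
        # spawning qemu-img, so the call also works on NFS exports
        # with root_squash enabled.
        currentSize, _alloc, _physical = self._dom.blockInfo(drive.path, 0)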
For 3.6.0 we won't support EL6 anymore, so this bug is unblocked.
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
In oVirt, testing is done on a single stream by default; therefore I'm removing the 4.0 flag. If you think this bug must be tested in 4.0 as well, please re-add the flag. Please note we might not have the testing resources to handle the 4.0 clone.
Moving from 4.0 alpha to 4.0 beta, since 4.0 alpha has already been released and the bug is not ON_QA.
Adam - is this going to make it to 4.0?
oVirt 4.0 beta has been released, moving to RC milestone.
Yes, this will make 4.0. I'll be posting a patch in a few minutes.
Hi Tal,

Just to confirm: one way to verify this change is to have an NFS domain with root_squash, create a VM with a disk, start it, and make sure the disk information is correct? Or do I have to check with virsh -r domblkinfo [domain] vda?
# virsh --readonly -r domblkinfo test_vm_root_squah vda
Capacity:       10737418240
Allocation:     2833460736
Physical:       3021410304

from vdsm:

capacity = 10737418240
format = COW
image = 8d4a4fbb-bef3-4aeb-810a-22877ea183d1
uuid = 25641c06-be62-4139-b53e-9ead20eb1284
disktype = 2
legality = LEGAL
mtime = 0
apparentsize = 3021406208
truesize = 3021410304
(In reply to Carlos Mestre González from comment #13)
> Hi Tal,
>
> Just to confirm: one way to verify this change is to have an NFS domain
> with root_squash, create a VM with a disk, start it, and make sure the
> disk information is correct? Or do I have to check with virsh -r
> domblkinfo [domain] vda?

Hi Mestre,

While the above command will verify that libvirt is fixed, it is not enough to verify this bug. This bug is about how we check the volume size in vdsm when doing a live disk resize.

To verify the bug, please perform the following steps:

1. Create a VM with a disk on an NFS domain where root_squash is enabled.
2. Create a snapshot including the disk from step 1.
3. Start the VM.
4. Extend the disk from the admin panel:
   - Click the VM.
   - Click the Disks tab in the lower panel.
   - Right-click the disk and choose 'Edit'.
   - Enter '1' in the 'Extend size by' box.
   - Click OK.
5. Wait for the operation to complete.
6. Use virsh -r domblkinfo to verify the size has been increased to the proper value (a scripted version of this check is sketched below).
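For step 6, if scripting the check is more convenient than reading virsh output, a read-only libvirt query like the one below would do. This is a sketch: the VM name, device name, and expected capacity are placeholders for your setup.

    import libvirt

    EXPECTED_CAPACITY = 6 * 1024**3  # e.g. a 5 GiB disk extended by 1 GiB

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('test_vm_squash')  # your VM name here
    capacity, allocation, physical = dom.blockInfo('vda', 0)
    print('capacity=%d allocation=%d physical=%d'
          % (capacity, allocation, physical))
    assert capacity == EXPECTED_CAPACITY, 'disk was not extended as expected'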
# virsh --readonly -r domblkinfo test_vm_squash vda
Capacity:       5368709120
Allocation:     0
Physical:       200704

(snapshot)

===> increased size to 10 GB

# virsh --readonly -r domblkinfo test_vm_squash vda
Capacity:       10737418240
Allocation:     262144
Physical:       204800

# vdsClient -s 0 getVolumeInfo 14eb5336-cd52-439b-8cd1-bfe8497596c0 a7f83cfc-fab1-4f53-98e5-b7ee1a3c0094 4ef80449-0335-4d37-9ec7-d852dec240d8 0fbb312a-95eb-4c6c-9bd4-7f6110848d38
    status = OK
    domain = 14eb5336-cd52-439b-8cd1-bfe8497596c0
    capacity = 10737418240
    voltype = LEAF
    description =
    parent = 63982858-ca79-4e4b-8613-dbddec14de27
    format = COW
    image = 4ef80449-0335-4d37-9ec7-d852dec240d8
    uuid = 0fbb312a-95eb-4c6c-9bd4-7f6110848d38
    disktype = 2
    legality = LEGAL
    mtime = 0
    apparentsize = 262656
    truesize = 204800
    type = SPARSE
    children = []
    pool =
    ctime = 1468505044

version: vdsm-4.18.6-1.el7ev.x86_64