Bug 1128855 - Take advantage of libvirt blockInfo support on root_squash NFS
Summary: Take advantage of libvirt blockInfo support on root_squash NFS
Alias: None
Product: vdsm
Classification: oVirt
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Target Milestone: ovirt-4.0.0-rc
Target Release: 4.18.0
Assignee: Tal Nisan
QA Contact: Carlos Mestre González
Depends On: 963881
Reported: 2014-08-11 16:06 UTC by Federico Simoncelli
Modified: 2016-08-01 12:29 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2016-08-01 12:29:17 UTC
oVirt Team: Storage
rule-engine: ovirt-4.0.0+
rule-engine: planning_ack+
tnisan: devel_ack+
rule-engine: testing_ack+

Attachments
proposed patch to fix the issue (2.20 KB, patch)
2014-08-12 20:32 UTC, Adam Litke

External Trackers:
System: oVirt gerrit | ID: 55399 | Branch: master | Status: MERGED | Summary: virt: Use libvirt to get drive size | Last Updated: 2016-05-26 11:36:15 UTC

Description Federico Simoncelli 2014-08-11 16:06:34 UTC
Description of problem:
VDSM should take advantage of the fix for bug 963881.

According to vdsm/virt/vm.py:4426:

    def _diskSizeExtendCow(self, drive, newSizeBytes):
        # Apparently this is what libvirt would do anyway, except that
        # it would fail on NFS when root_squash is enabled, see BZ#963881
        # Patches have been submitted to avoid this behavior, the virtual
        # and apparent sizes will be returned by the qemu process and
        # through the libvirt blockInfo call.
        currentSize = qemuimg.info(drive.path, "qcow2")['virtualsize']
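For reference, a minimal sketch of what the libvirt-based variant could look like (assumed names; self._dom is the Vm's libvirt domain handle, and the actual change may differ):

    def _diskSizeExtendCow(self, drive, newSizeBytes):
        # blockInfo is answered by the running qemu process, so it
        # works even when root_squash prevents reading the image as
        # root. It returns (capacity, allocation, physical) in bytes;
        # capacity is the qcow2 virtual size.
        capacity, alloc, physical = self._dom.blockInfo(drive.path, 0)
        currentSize = capacity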

Version-Release number of selected component (if applicable):

Comment 1 Adam Litke 2014-08-12 19:37:58 UTC
The fix is verified in libvirt-0.10.2-31.el6.x86_64 but vdsm still allows libvirt-0.10.2-29.el6_5.7.x86_64 for RHEL6 hosts (which is broken).  Are we willing to bump the requirement for RHEL6 up to -31?  If not, then we cannot switch to the new method without breaking RHEL6 (unless we add code to try both methods: first libvirt, then qemu-img).
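A rough sketch of the dual-method fallback described above (hypothetical helper; the qemuimg import path is assumed):

    import libvirt
    from vdsm import qemuimg  # assumed import path

    def currentDriveSize(dom, drive):
        # Prefer libvirt's blockInfo (works from -31 onwards); fall
        # back to qemu-img on older libvirt builds where the call
        # fails on root_squash NFS.
        try:
            capacity, alloc, physical = dom.blockInfo(drive.path, 0)
            return capacity
        except libvirt.libvirtError:
            return qemuimg.info(drive.path, "qcow2")['virtualsize']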

Comment 2 Allon Mureinik 2014-08-12 19:49:53 UTC
Just to clarify - the title had a typo - it was supposed to be "blocked", not "blocker".

IMHO, as long as -31 is available on CentOS, sure - let's bump the requirement.
If not, let's wait for it, and then bump the requirement.

Comment 3 Adam Litke 2014-08-12 20:29:57 UTC
As of 8/12/2014 the current libvirt version (in 6.5) is libvirt-0.10.2-29.el6.x86_64.rpm.  Keeping it targeted for 3.6.0.

Comment 4 Adam Litke 2014-08-12 20:32:09 UTC
Created attachment 926207 [details]
proposed patch to fix the issue

Here is a quick, untested patch to switch to using the libvirt API. Putting this here for reference until we get back to working on this bug.

Comment 5 Allon Mureinik 2015-04-19 15:41:54 UTC
For 3.6.0 we won't support EL6 anymore, so this bug is unblocked.

Comment 6 Red Hat Bugzilla Rules Engine 2015-10-19 10:59:21 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 7 Yaniv Lavi 2015-10-29 12:06:24 UTC
In oVirt, testing is done on a single stream by default, so I'm removing the 4.0 flag. If you think this bug must be tested in 4.0 as well, please re-add the flag. Note that we might not have the testing resources to handle the 4.0 clone.

Comment 8 Sandro Bonazzola 2016-05-02 09:57:50 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and this bug is not ON_QA.

Comment 9 Yaniv Kaul 2016-05-08 10:58:45 UTC
Adam - is this going to make it to 4.0?

Comment 10 Yaniv Lavi 2016-05-23 13:18:53 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 12 Adam Litke 2016-05-25 14:09:50 UTC
Yes, this will make 4.0.  I'll be posting a patch in a few minutes.

Comment 13 Carlos Mestre González 2016-07-14 12:04:41 UTC
Hi Tal,

Just to confirm: one way to verify this change is to have an NFS domain with root_squash, create a VM with a disk, start it, and make sure the disk information is correct? Or do I need to check with virsh -r domblkinfo [domain] vda?

Comment 14 Carlos Mestre González 2016-07-14 12:43:53 UTC
# virsh --readonly -r domblkinfo test_vm_root_squah vda
Capacity:       10737418240
Allocation:     2833460736
Physical:       3021410304

from vdsm: 
	capacity = 10737418240
	format = COW
	image = 8d4a4fbb-bef3-4aeb-810a-22877ea183d1
	uuid = 25641c06-be62-4139-b53e-9ead20eb1284
	disktype = 2
	legality = LEGAL
	mtime = 0
	apparentsize = 3021406208
	truesize = 3021410304

Comment 15 Adam Litke 2016-07-14 13:42:57 UTC
(In reply to Carlos Mestre González from comment #13)
> Hi Tal,
> Just to confirm: one way to verify this change is to have an NFS domain
> with root_squash, create a VM with a disk, start it, and make sure the
> disk information is correct? Or do I need to check with virsh -r
> domblkinfo [domain] vda?

Hi Mestre,

While the above command will verify that libvirt is fixed, it is not enough to verify this bug. This bug is about how vdsm checks the volume size when doing a live disk resize. To verify the bug, please perform the following steps:

1. Create a VM with a disk on an NFS domain where root_squash is enabled.
2. Create a snapshot including the disk from step 1.
3. Start the VM.
4. Extend the disk from the admin panel:
   - Click the VM
   - Click the disks tab in the lower panel
   - Right click the disk and choose 'Edit'
   - Enter '1' in the Extend size by... box
   - Click OK
5. Wait for the operation to complete.
6. Use virsh -r domblkinfo to verify the size has been increased to the proper value.
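For step 6, an equivalent check with libvirt-python (the connection URI and VM name are just examples):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('test_vm_squash')  # example VM name
    # blockInfo accepts the target device name ('vda') or the image
    # path and returns sizes in bytes.
    capacity, allocation, physical = dom.blockInfo('vda', 0)
    print('Capacity:   %d' % capacity)    # should show the extended size
    print('Allocation: %d' % allocation)
    print('Physical:   %d' % physical)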

Comment 16 Carlos Mestre González 2016-07-14 14:24:49 UTC
# virsh --readonly -r domblkinfo test_vm_squash vda
Capacity:       5368709120
Allocation:     0
Physical:       200704 (snapshot)

===> increased size to 10 GB
# virsh --readonly -r domblkinfo test_vm_squash vda
Capacity:       10737418240
Allocation:     262144
Physical:       204800

# vdsClient -s 0 getVolumeInfo 14eb5336-cd52-439b-8cd1-bfe8497596c0  a7f83cfc-fab1-4f53-98e5-b7ee1a3c0094 4ef80449-0335-4d37-9ec7-d852dec240d8 0fbb312a-95eb-4c6c-9bd4-7f6110848d38
	status = OK
	domain = 14eb5336-cd52-439b-8cd1-bfe8497596c0
	capacity = 10737418240
	voltype = LEAF
	description = 
	parent = 63982858-ca79-4e4b-8613-dbddec14de27
	format = COW
	image = 4ef80449-0335-4d37-9ec7-d852dec240d8
	uuid = 0fbb312a-95eb-4c6c-9bd4-7f6110848d38
	disktype = 2
	legality = LEGAL
	mtime = 0
	apparentsize = 262656
	truesize = 204800
	type = SPARSE
	children = []
	pool = 
	ctime = 1468505044

version: vdsm-4.18.6-1.el7ev.x86_64
