Bug 651803 - virDomainGetBlockInfo does not refresh physical size of disk image after its lvextend
Summary: virDomainGetBlockInfo does not refresh physical size of disk image after its lvextend
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Dan Kenigsberg
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 651335 660162
 
Reported: 2010-11-10 10:56 UTC by Dan Kenigsberg
Modified: 2016-04-18 06:35 UTC
CC List: 14 users

Fixed In Version: vdsm-4.9-43.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-19 15:20:51 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Dan Kenigsberg 2010-11-10 10:56:56 UTC
Description of problem:
libvirt does not seem to notice when the underlying disk image is extended by lvextend. We ended up with domblkinfo reporting 4G while the true size was 187G:

# virsh domblkinfo 20c0d64c-d706-439a-a39b-9ced382004f0 /rhev/data-center/2c765f7a-2cf1-4025-a107-2c76d44af6a9/5235507a-a6c7-4ec1-a34e-4f7f754a1230/images/4a79ca7d-371d-4a9b-8f8d-a0010b005f26/8e714d9f-b83c-47b5-9d70-8dbae7f8ea13
Capacity:       16106127360
Allocation:     11483217408
Physical:       4294967296

[root@vm-18-16 ~]# lvs |grep 8e714d9f-b83c-47b5-9d70-8dbae7f8ea13 
  8e714d9f-b83c-47b5-9d70-8dbae7f8ea13 5235507a-a6c7-4ec1-a34e-4f7f754a1230 -wi-ao 187.00g


Version-Release number of selected component (if applicable):
libvirt-0.8.1-27.el6.x86_64

Comment 1 Daniel Berrangé 2010-11-10 11:06:10 UTC
Can you tell me what 'qemu-img info /big/long/path' shows?

Comment 2 Dan Kenigsberg 2010-11-10 12:08:19 UTC
# qemu-img info -f qcow2 /rhev/data-center/2c765f7a-2cf1-4025-a107-2c76d44af6a9/5235507a-a6c7-4ec1-a34e-4f7f754a1230/images/4a79ca7d-371d-4a9b-8f8d-a0010b005f26/8e714d9f-b83c-47b5-9d70-8dbae7f8ea13
image: /rhev/data-center/2c765f7a-2cf1-4025-a107-2c76d44af6a9/5235507a-a6c7-4ec1-a34e-4f7f754a1230/images/4a79ca7d-371d-4a9b-8f8d-a0010b005f26/8e714d9f-b83c-47b5-9d70-8dbae7f8ea13
file format: qcow2
virtual size: 15G (16106127360 bytes)
disk size: 0
cluster_size: 65536
backing file: ../4a79ca7d-371d-4a9b-8f8d-a0010b005f26/31e65ec0-6ccf-476e-b3cd-5b07a55809ec (actual path: /rhev/data-center/2c765f7a-2cf1-4025-a107-2c76d44af6a9/5235507a-a6c7-4ec1-a34e-4f7f754a1230/images/4a79ca7d-371d-4a9b-8f8d-a0010b005f26/../4a79ca7d-371d-4a9b-8f8d-a0010b005f26/31e65ec0-6ccf-476e-b3cd-5b07a55809ec)


# qemu-img info /rhev/data-center/2c765f7a-2cf1-4025-a107-2c76d44af6a9/5235507a-a6c7-4ec1-a34e-4f7f754a1230/images/4a79ca7d-371d-4a9b-8f8d-a0010b005f26/../4a79ca7d-371d-4a9b-8f8d-a0010b005f26/31e65ec0-6ccf-476e-b3cd-5b07a55809ec
image: /rhev/data-center/2c765f7a-2cf1-4025-a107-2c76d44af6a9/5235507a-a6c7-4ec1-a34e-4f7f754a1230/images/4a79ca7d-371d-4a9b-8f8d-a0010b005f26/../4a79ca7d-371d-4a9b-8f8d-a0010b005f26/31e65ec0-6ccf-476e-b3cd-5b07a55809ec
file format: raw
virtual size: 15G (16106127360 bytes)
disk size: 0

Comment 3 Daniel Berrangé 2010-11-10 12:41:22 UTC
Capacity:       16106127360
Allocation:     11483217408
Physical:       4294967296

Capacity matches the qcow2 virtual size as reported by qemu-img info, so that's correct.

Physical is approximately 4 GB, which I presume is roughly the amount of data that has been written into the qcow2 file.

Allocation is about 10 GB, which should be the size of the LVM volume. libvirt determines this by opening the file and calling seek(fd, 0, SEEK_END). The return value of seek is the new position, which should match the file/device size.

What does this say?

perl -we 'open FILE, $ARGV[0]; seek FILE,0,2; print ((tell FILE), "\n");' /path/to/big/device.
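For reference, here is a minimal Python sketch of the same SEEK_END probe; the path below is only a placeholder:

import os

def apparent_size(path):
    # Open read-only and seek to the end; the returned offset is the
    # size that a SEEK_END-based probe (like libvirt's) will observe.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

print(apparent_size("/path/to/big/device"))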

Comment 4 Dan Kenigsberg 2011-01-11 18:21:24 UTC
Indeed,

# perl -we 'open FILE, $ARGV[0]; seek FILE,0,2; print ((tell FILE), "\n");' /rhev/data-center/8f343126-1ef5-4939-89c7-d0d69468298d/a21af405-3101-432f-9b80-bf28a8acf168/images/93074596-7266-465e-8e72-9f78860e4eb8/d2c85070-4ee6-448e-a93d-7a5e1335103a
1073741824

however,

# lvs |grep d2c85070-4ee6-448e-a93d-7a5e1335103a
  d2c85070-4ee6-448e-a93d-7a5e1335103a a21af405-3101-432f-9b80-bf28a8acf168 -wi-ao   2.06g

The domain is paused; when resumed, it paused again due to

remoteRelayDomainEventIOErrorReason:274 : Relaying domain io error fc-win7x86-2 3 /rhev/data-center/8f343126-1ef5-4939-89c7-d0d69468298d/a21af405-3101-432f-9b80-bf28a8acf168/images/93074596-7266-465e-8e72-9f78860e4eb8/d2c85070-4ee6-448e-a93d-7a5e1335103a ide0-0-0 1 enospc

The interesting thing is that qemu is reporting a much higher "wr_highest_offset" of 2096758272.

qemuMonitorJSONIOProcessLine:115 : Line [{"return": [{"device": "drive-ide0-0-0", "parent": {"stats": {"wr_highest_offset": 2096758272, "wr_bytes": 2466782720, "wr_operations": 75607, "rd_bytes": 786583552, "rd_operations": 24030}}, "stats": {"wr_highest_offset": 10808802816, "wr_bytes": 2466782720, "wr_operations": 60029, "rd_bytes": 4941649408, "rd_operations": 151075}}, {"device": "drive-ide0-1-0", "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, {"device": "drive-fdc0-0-0", "parent": {"stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"wr_highest_offset": 0, "wr_bytes": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}]}]

It seems that qemu's writes to the qcow2 volume do not update the SEEK_END size of the host block device.

I am raising the severity of this bug, as it blocks me from circumventing bug https://bugzilla.redhat.com/show_bug.cgi?id=659301#c16 in my bug 660162.

Comment 5 Daniel Berrangé 2011-01-11 18:32:36 UTC
> # perl -we 'open FILE, $ARGV[0]; seek FILE,0,2; print ((tell FILE), "\n");'
> /rhev/data-center/8f343126-1ef5-4939-89c7-d0d69468298d/a21af405-3101-432f-9b80-bf28a8acf168/images/93074596-7266-465e-8e72-9f78860e4eb8/d2c85070-4ee6-448e-a93d-7a5e1335103a
> 1073741824
>
> however,
>
> # lvs |grep d2c85070-4ee6-448e-a93d-7a5e1335103a
>  d2c85070-4ee6-448e-a93d-7a5e1335103a a21af405-3101-432f-9b80-bf28a8acf168
> -wi-ao   2.06g

This is really bizarre: LVM is reporting the volume is 2 GB in size, but SEEK_END stops at 1 GB. The QEMU wr_highest_offset is approximately 2 GB too, which matches what LVM reports.

I don't really understand how SEEK_END can ever disagree with LVM about what the volume size is, though. libvirt definitely relies on SEEK_END being correct, so whatever is making SEEK_END misbehave is in turn causing libvirt to report confusing results.

Comment 6 Daniel Berrangé 2011-01-11 19:19:17 UTC
Even the kernel thinks the device is still 1 GB in size:

# blockdev  --getsize64 /dev/a21af405-3101-432f-9b80-bf28a8acf168/d2c85070-4ee6-448e-a93d-7a5e1335103a
1073741824


So, somehow the kernel's view of the volume size and LVM's view of the volume size are different. This in turn causes libvirt to report wrong data.
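For completeness, a rough Python sketch of the kernel-side check made above: the BLKGETSIZE64 ioctl is what blockdev --getsize64 queries; the request value assumes 64-bit Linux and the device path is a placeholder:

import fcntl
import os
import struct

# BLKGETSIZE64 == _IOR(0x12, 114, size_t); this value assumes 64-bit Linux.
BLKGETSIZE64 = 0x80081272

def kernel_device_size(path):
    # The kernel's idea of the block device size in bytes,
    # i.e. the same number that "blockdev --getsize64" prints.
    fd = os.open(path, os.O_RDONLY)
    try:
        buf = fcntl.ioctl(fd, BLKGETSIZE64, b"\0" * 8)
        return struct.unpack("Q", buf)[0]
    finally:
        os.close(fd)

print(kernel_device_size("/dev/VG/LV"))  # placeholder device path

Comparing that number with what lvs reports for the LV shows the same mismatch described above.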

Comment 7 Daniel Berrangé 2011-01-11 19:21:58 UTC
After doing a further lvextend of +1M on the virt node, the kernel and LVM views of the size are back in sync. This combination of factors suggests that there is a bug in LVM where a resize performed on the SPM does not correctly propagate to the node running the VM(s).

Comment 8 Jiri Denemark 2011-01-18 13:59:31 UTC
This bug is filed against libvirt but has vdsm-4.9-43.el6 in the Fixed In Version field and is in ON_QA. Igor, was the change you made intentional?

Comment 9 Daniel Berrangé 2011-01-18 14:23:27 UTC
This isn't a libvirt bug; it should have been moved to vdsm.

Comment 10 Haim 2011-02-06 14:51:03 UTC
lvextend now works system-wide (vdsm-4.9-47.el6.x86_64, qemu-kvm-0.12.1.2-2.138.el6.x86_64, libvirt-0.8.7-5.el6.x86_64).

scenario: 

1) run a new domain (live CD) from rhevm with a thin-provisioned disk (cow) of 20G
2) create a file system on the new disk and mount it under /mnt
3) using dd, fill the disk with zeros
4) the LV is extended (by an additional 1G); a rough consistency check is sketched below
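Not part of the original verification run, but a rough Python sketch of the kind of consistency check involved, comparing LVM's reported size with the SEEK_END view on the same node (the VG/LV names are placeholders):

import os
import subprocess

def lvm_size_bytes(vg, lv):
    # Ask lvs for the LV size in plain bytes (no unit suffix, no header).
    out = subprocess.check_output(
        ["lvs", "--noheadings", "--nosuffix", "--units", "b",
         "-o", "lv_size", "%s/%s" % (vg, lv)])
    return int(float(out.decode().strip()))

def seek_size_bytes(path):
    # Size as seen by a SEEK_END probe, which is what libvirt relies on.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

vg, lv = "myvg", "mylv"  # placeholders for the storage-domain VG and image LV
print(lvm_size_bytes(vg, lv), seek_size_bytes("/dev/%s/%s" % (vg, lv)))

After the extend, the two numbers should agree; with the bug present, the SEEK_END value stays at the old size.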

