Bug 1250982
| Summary: | libvirt reports physical=0 for QCOW2 volumes on block storage | |||
|---|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Francesco Romani <fromani> | |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> | |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> | |
| Severity: | urgent | Docs Contact: | ||
| Priority: | urgent | |||
| Version: | 7.2 | CC: | amureini, dyuan, frolland, fromani, jdenemar, nsoffer, pkrempa, rbalakri, xuzhang, yisun | |
| Target Milestone: | rc | Keywords: | Regression | |
| Target Release: | --- | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | libvirt-1.2.17-5.el7 | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1251008 1253754 (view as bug list) | Environment: | ||
| Last Closed: | 2015-11-19 06:50:02 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1154205, 1251008, 1253754 | |||
This breaks RHEV thin provisioning on block storage, and is a regression from earlier releases.

Can you check with SELinux in permissive mode? If it works in permissive mode, can you get the related AVCs? For example:
ausearch -m AVC -ts today
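A minimal sketch of that check, reusing the domain name "vm" and target "vdb" from the reproduction steps below; the retest command is just a placeholder for whatever operation misbehaved:

# check and temporarily relax SELinux, retest, then collect today's AVC denials
getenforce
setenforce 0
virsh domblkinfo vm vdb
ausearch -m AVC -ts today
setenforce 1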
Can be reproduced on a pure libvirt environment.

versions:
kernel-devel-3.10.0-302.el7.x86_64
libvirt-1.2.17-3.el7.x86_64
qemu-kvm-rhev-2.3.0-14.el7.x86_64
preparation:
I used an iSCSI LUN as a block device (/dev/sdj) and a USB disk as another block device (/dev/sdb); both of them reproduce this issue with the following steps.
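For reference, a sketch of how such an iSCSI LUN can be attached on the host; the portal address 192.0.2.10 and the target IQN are placeholders, and the resulting device name (/dev/sdj here) depends on the setup:

# discover the targets exported by the portal, then log in to attach the LUN
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2015-08.com.example:target1 -p 192.0.2.10 --login
# the new block device should now show up
lsscsi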
steps:
in host:
1. #qemu-img create -f qcow2 /dev/sdj 1G
Formatting '/dev/sdj', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
2. #virsh destroy vm; virsh edit vm
Add the following:
<disk type='block' device='disk'>
<driver name='qemu' type='qcow2'/>
<source dev='/dev/sdj'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
3. #virsh start vm
Domain vm started
in guest:
4. #mkfs.xfs /dev/vdb
5. #mount /dev/vdb /mnt
6. #cp /boot/* /mnt
in host:
7. # virsh domblkinfo vm vdb
Capacity: 1073741824
Allocation: 107412992
Physical: 0 <====== physical size is zero
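For comparison, "Physical:" is expected to report the size of the underlying block device itself; a quick cross-check on the host, assuming the same /dev/sdj used above:

# the full size of the backing block device -- the value "Physical:" should show
blockdev --getsize64 /dev/sdj
# qemu-img's own view; note that "disk size" is also reported as 0 for block devices
qemu-img info /dev/sdj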
Fixed upstream:
commit 8dc27259255b79758367789ed272e909bdb56735
Author: Peter Krempa <pkrempa>
Date: Fri Aug 7 11:01:49 2015 +0200
qemu: Fix reporting of physical capacity for block devices
Qemu reports physical size 0 for block devices. As 15fa84acbb55ebfee6a4
changed the behavior of qemuDomainGetBlockInfo to just query the monitor
this created a regression since we didn't report the size correctly any
more.
This patch adds code to refresh the physical size of a block device by
opening it and seeking to the end and uses it both in
qemuDomainGetBlockInfo and also in qemuDomainGetStatsOneBlock that was
broken since it was introduced in this respect.
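To see where the zero originates, one can query qemu directly over the monitor; a sketch assuming the reproduction domain "vm" from above (treat the exact JSON field carrying the physical size, e.g. "actual-size" in the image info, as an assumption that may vary between qemu versions):

# dump qemu's view of the attached block devices over QMP; for a host block
# device qemu reports a zero physical size here, which libvirt passed through
# unchanged before this fix
virsh qemu-monitor-command vm --pretty '{"execute": "query-block"}'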
(In reply to Peter Krempa from comment #6)
> Fixed upstream:

Thanks for the quick fix, Peter! According to https://apps.fedoraproject.org/packages/libvirt the broken version is also available in Fedora 23 (and probably in virt-preview on Fedora 22?). Should I open a separate Fedora bug, or will this fix also be backported to Fedora?

Jiri, can you check comment 8?

Yeah, file a separate Fedora bug.

Verified on:
libvirt-1.2.17-6.el7.x86_64
qemu-kvm-rhev-2.3.0-18.el7.x86_64
Steps:
1. Prepared an iSCSI LUN
# lsscsi
...
[78:0:0:1] disk IET VIRTUAL-DISK 0001 /dev/sdd <=== the iSCSI LUN
2. # qemu-img create -f qcow2 /dev/sdd 1G
Formatting '/dev/sdd', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
3. # qemu-img info /dev/sdd
image: /dev/sdd
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
4. #virsh edit rhel1
Add the following segment:
<disk type='block' device='disk'>
<driver name='qemu' type='qcow2'/>
<source dev='/dev/sdd'/>
<target dev='vdb' bus='virtio'/>
</disk>
5. # virsh domblkinfo rhel1 vdb
Capacity: 1073741824
Allocation: 0
Physical: 1073741824
Log in to the guest:
6. # fdisk -l
Disk /dev/vda: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
7. #mkfs.xfs /dev/vda
8. #mount /dev/vda /mnt
9. # dd if=/dev/zero of=/mnt/1G.file bs=1M count=1024
dd: error writing ‘/mnt/1G.file’: No space left on device
982+0 records in
981+0 records out
1029517312 bytes (1.0 GB) copied, 4.54759 s, 226 MB/s
Back on the host:
10. # virsh domblkinfo rhel1 vdb
Capacity: 1073741824
Allocation: 1040776704
Physical: 1073741824 <==== Physical size is correct, as expected.
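The same comparison can be scripted so the verification does not rely on reading the numbers by eye; a small sketch using the domain "rhel1", target "vdb", and device /dev/sdd from the steps above:

# compare libvirt's reported physical size with the real size of the device
phys=$(virsh domblkinfo rhel1 vdb | awk '/^Physical:/ {print $2}')
dev=$(blockdev --getsize64 /dev/sdd)
if [ "$phys" -eq "$dev" ]; then echo PASS; else echo "FAIL: $phys != $dev"; fi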
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

Peter, hi. Is this bug relevant in Fedora 23? Thanks, Fred

(In reply to Fred Rolland from comment #15)
> Peter hi,
>
> Is this bug relevant in Fedora 23 ?

https://bugzilla.redhat.com/show_bug.cgi?id=1253754
Description of problem:

Found while installing oVirt 3.6.0 (development snapshot). I'm not sure if it is a libvirt or a QEMU issue. I created a VM using block storage (LVM) and a thin-provisioned image (QCOW2). Libvirt misreports the "physical" size of the volume, breaking the transparent volume resizing flow in oVirt.

I was building a fresh 3.6.0 devel environment (vdsm/engine from git) on RHEL 7.2 to check virt patches. I'm using iSCSI storage and thin-provisioned volumes. The automatic volume extension flow seems broken, meaning the space is eventually exhausted and the VM goes paused. The problem seems to be that libvirt always reports physical=0 for the disk.

After a freshly started VM:

BENji> 09:33:20 root [~]$ virsh -r domblkinfo 2 vda
Capacity: 4294967296
Allocation: 952929792
Physical: 0

Same with bulk stats (excerpt):

block.1.name=vda
block.1.path=/rhev/data-center/00000001-0001-0001-0001-0000000002ae/e1f383a6-7496-425f-bb47-c9310dbdf821/images/bfa8a4a1-c7a4-4c05-9047-87f592870a98/0a82037a-a70d-4173-95c3-a85c62337e55
block.1.rd.reqs=5359
block.1.rd.bytes=99524096
block.1.rd.times=11901668378
block.1.wr.reqs=1786
block.1.wr.bytes=5670400
block.1.wr.times=18790464626
block.1.fl.reqs=88
block.1.fl.times=8583220632
block.1.allocation=952929792
block.1.capacity=4294967296

The same image and the same VM run on a RHEL 7.1 host, using libvirt 1.2.8-16.el7_1.3 and qemu-kvm-rhev 10:2.1.2-23.el7_1.7:

HOji> 10:25:24 root [~]$ virsh -r domblkinfo 2 vda
Capacity: 4294967296
Allocation: 1063390720
Physical: 2147483648

Here's what the disk XML looks like:

<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
<source dev='/rhev/data-center/00000001-0001-0001-0001-0000000002ae/e1f383a6-7496-425f-bb47-c9310dbdf821/images/bfa8a4a1-c7a4-4c05-9047-87f592870a98/0a82037a-a70d-4173-95c3-a85c62337e55'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>bfa8a4a1-c7a4-4c05-9047-87f592870a98</serial>
<boot order='1'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>

Version-Release number of selected component (if applicable):
libvirt 1.2.17-2.el7
qemu-kvm-rhev 10:2.3.0-15.el7

How reproducible:
100% of my tries. Seems to behave the same with vanilla libvirt 1.2.17 from libvirt.org (rebuilt RPMs from sources).

Steps to Reproduce:
1. Create a VM with a QCOW2 virtio disk on block storage. I used a 4GB thin-provisioned image (2GB actually reserved, see the good domblkinfo output above).
2. Run out of space on the volume. I just installed CentOS 7 using all the provided defaults, then tried to run "yum -y update".
3. Check the reported physical allocation using e.g. virsh domblkinfo.

Actual results:
physical is always zero

Expected results:
physical reports the right value
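For context on why physical=0 breaks the flow: the automatic extension logic compares the reported allocation against the physical size of the backing LV and extends the volume once the free margin shrinks below a threshold. A rough sketch of that kind of watermark check, not oVirt's actual code; the domain "vm", target "vda", and the 512 MiB threshold are illustrative values:

# if allocation gets within THRESHOLD bytes of the physical size, the volume
# would need extending; a bogus physical=0 makes this margin meaningless
THRESHOLD=$((512 * 1024 * 1024))
read -r alloc phys < <(virsh -r domblkinfo vm vda |
    awk '/^Allocation:/ {a=$2} /^Physical:/ {p=$2} END {print a, p}')
if [ $((phys - alloc)) -lt "$THRESHOLD" ]; then
    echo "volume needs extension (allocation=$alloc physical=$phys)"
fi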