Bug 1470127 - RHEL7.4: virDomainGetBlockInfo always returns alloc=0 on block storage
Summary: RHEL7.4: virDomainGetBlockInfo always returns alloc=0 on block storage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: Han Han
URL:
Whiteboard:
Depends On: 1467826 1473706
Blocks: 1461536 1465539
Reported: 2017-07-12 12:28 UTC by Jaroslav Reznik
Modified: 2019-04-28 13:15 UTC
CC List: 27 users

Fixed In Version: libvirt-3.2.0-14.el7_4.2
Doc Type: Bug Fix
Doc Text:
Cause: Editing error while making changes to the code.
Consequence: For a block device, virDomainGetBlockInfo always returned 0 for the allocation value of a sparse target device for an active QEMU domain.
Fix: Remove the extraneous line.
Result: The allocation value for a sparse block device target for an active QEMU domain will have the correct value as determined at the time of statistic collection.
Clone Of: 1467826
Environment:
Last Closed: 2017-08-01 11:30:30 UTC
Target Upstream Version:
Embargoed:


Links
System ID                                    Private  Priority  Status        Summary                 Last Updated
Red Hat Knowledge Base (Solution) 3132861    0        None      None          None                    2017-07-31 17:10:31 UTC
Red Hat Product Errata RHBA-2017:2334        0        normal    SHIPPED_LIVE  libvirt bug fix update  2017-08-14 22:56:05 UTC

Description Jaroslav Reznik 2017-07-12 12:28:08 UTC
This bug has been copied from bug #1467826 and has been proposed to be backported to 7.4 z-stream (EUS).
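
For reference, the call in question is virDomainGetBlockInfo, which virsh exposes as 'domblkinfo' and libvirt-python exposes as blockInfo(). Below is a minimal sketch of a caller; the connection URI, domain name 'elad-test1', and target 'sda' are only placeholders matching the reproduction further down, not part of the fix.

# Minimal sketch: query the three values that virDomainGetBlockInfo returns,
# using the libvirt-python binding. Domain name and target device are
# placeholders matching the reproduction below.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('elad-test1')

# blockInfo() wraps virDomainGetBlockInfo and returns
# [capacity, allocation, physical] in bytes, the same three numbers
# that 'virsh domblkinfo' prints.
capacity, allocation, physical = dom.blockInfo('sda')
print('Capacity:  ', capacity)
print('Allocation:', allocation)   # the value this bug reports as always 0
print('Physical:  ', physical)

conn.close()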

Comment 9 Elad 2017-07-12 22:38:32 UTC
Tested libvirt build from https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=13642285

Created a VM in RHV with an OS disk and an additional thin-provisioned disk (a 1G LV that can be extended up to 10G) residing on iSCSI:
virsh # dumpxml elad-test1

<domain type='kvm' id='5'>
  <name>elad-test1</name>
  <uuid>323f6481-9a94-4ffa-9d14-0ea450216de1</uuid>
  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
    <ovirt:qos/>
  </metadata>
  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static' current='2'>16</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Red Hat</entry>
      <entry name='product'>RHEV Hypervisor</entry>
      <entry name='version'>7.4-18.el7</entry>
      <entry name='serial'>2f2205c7-55fd-4659-8e78-f4251e081345</entry>
      <entry name='uuid'>323f6481-9a94-4ffa-9d14-0ea450216de1</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Nehalem</model>
    <topology sockets='16' cores='1' threads='1'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='hypervisor'/>
    <numa>
      <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/run/vdsm/payload/323f6481-9a94-4ffa-9d14-0ea450216de1.fa97e9451a04ba34821af00a3cef0af7.img' startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-1'/>
      <address type='drive' controller='0' bus='1' target='0' unit='1'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/cc5c839e-0167-4785-995c-34490c3603f7/4a68c9fb-a7a0-414a-8523-5806efd276fc/images/e38fc83d-1ffc-4f61-82f1-cba21cee5d49/ee8c8715-6bbd-44af-9afe-470a6bfcbb03'/>
      <backingStore type='block' index='1'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/cc5c839e-0167-4785-995c-34490c3603f7/4a68c9fb-a7a0-414a-8523-5806efd276fc/images/e38fc83d-1ffc-4f61-82f1-cba21cee5d49/577903ca-b37d-4829-ad8b-3a124c34dfee'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>e38fc83d-1ffc-4f61-82f1-cba21cee5d49</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/cc5c839e-0167-4785-995c-34490c3603f7/4a68c9fb-a7a0-414a-8523-5806efd276fc/images/a5d20a77-f7ee-4c80-b009-3f2fe624a1b6/9ed3ba9d-5c24-4cfb-8f08-b2d3c68d830e'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <serial>a5d20a77-f7ee-4c80-b009-3f2fe624a1b6</serial>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:16:98:22'/>
      <source bridge='rhevm'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/323f6481-9a94-4ffa-9d14-0ea450216de1.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/323f6481-9a94-4ffa-9d14-0ea450216de1.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' tlsPort='5901' autoport='yes' listen='10.35.68.9' defaultMode='secure' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='10.35.68.9' network='vdsm-rhevm'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c460,c533</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c460,c533</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>



On the guest, wrote data with dd to the thin disk:


[root@vm-70-34 ~]# time dd if=/dev/zero of=/dev/sda bs=8M count=500 oflag=direct
500+0 records in
500+0 records out
4194304000 bytes (4.2 GB) copied, 6.20681 s, 676 MB/s

real    0m6.209s
user    0m0.003s
sys     0m0.702s



Allocation is not zero when volume extension is needed:

virsh # domblkinfo elad-test1 sda
Capacity:       10737418240
Allocation:     2111913984
Physical:       4294967296

virsh # domblkinfo elad-test1 sda
Capacity:       10737418240
Allocation:     3331063808
Physical:       4294967296



VDSM:


2017-07-13 01:27:17,714+0300 INFO  (periodic/1) [virt.vm] (vmId='323f6481-9a94-4ffa-9d14-0ea450216de1') Requesting extension for volume 9ed3ba9d-5c24-4cfb-8f08-b2d3c68d830e on domain 4a68c9fb-a7a0-414a-8523-5806efd276fc (apparent: 4294967296, capacity: 10737418240, allocated: 3065995264, physical: 4294967296) (vm:909)
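
For context, this is the code path that consumes the allocation value: VDSM-style watermark monitoring compares the allocation reported by libvirt against the physical size of the thin LV and requests an extension once the free space drops below a threshold. A rough sketch of that kind of check in libvirt-python follows; the 512 MiB watermark and 1 GiB chunk are hypothetical values, and the real VDSM logic differs in detail.

# Rough sketch of a watermark check of the kind shown in the VDSM log above.
# The watermark and chunk sizes are hypothetical; the real VDSM logic differs.
import libvirt

WATERMARK = 512 * 1024 * 1024    # ask for more space when free < 512 MiB
CHUNK = 1024 * 1024 * 1024       # grow the LV by 1 GiB at a time

def needs_extension(dom, dev):
    capacity, allocation, physical = dom.blockInfo(dev)
    free = physical - allocation
    # With the bug, allocation was always 0 for block storage, so 'free'
    # equalled 'physical' and this condition never triggered.
    return free < WATERMARK and physical < capacity

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('elad-test1')
if needs_extension(dom, 'sda'):
    print('would request a %d MiB extension' % (CHUNK // (1024 * 1024)))
conn.close()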



Used:
libvirt-daemon-driver-secret-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-debuginfo-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-core-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-config-nwfilter-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-mpath-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-admin-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-network-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-interface-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-config-network-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-kvm-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-login-shell-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-libs-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-nwfilter-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-lxc-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-client-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-nss-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-devel-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-python-3.2.0-3.el7.x86_64
libvirt-daemon-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-qemu-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-lock-sanlock-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-nodedev-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-lxc-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-docs-3.2.0-15.el7_rc.9a49cd47ef.x86_64
qemu-kvm-rhev-2.9.0-14.el7.x86_64
qemu-kvm-common-rhev-2.9.0-14.el7.x86_64
qemu-kvm-tools-rhev-2.9.0-14.el7.x86_64
qemu-img-rhev-2.9.0-14.el7.x86_64
vdsm-4.19.20-1.el7ev.x86_64

Comment 10 Allon Mureinik 2017-07-13 05:54:51 UTC
Elad, can we please rerun the scenario from bug 1461536 and confirm that this libvirt scratch build solves that issue too?

Comment 11 Nir Soffer 2017-07-13 10:58:09 UTC
This seems to be fixed, but see bug 1470634.

Comment 12 Jaroslav Suchanek 2017-07-13 12:14:54 UTC
(In reply to Nir Soffer from comment #11)
> This seems to be fixed, but see bug 1470634.

Okay, let's check it; moving back to ASSIGNED for now.

Comment 13 Elad 2017-07-13 12:32:23 UTC
(In reply to Allon Mureinik from comment #10)
> Elad, can we please rerun the scenario from bug 1461536 and confirm that
> this libvirt scratch build solves that issue too?


Executed https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/workitem?id=RHEVM3-5063 over iSCSI, whose failure was reported in bug 1461536. It passed.




2017-07-13 15:22:21,937 - MainThread - rhevmtests.storage.helpers - INFO - Creating partition /sbin/parted /dev/vdb --script -- mkpart primary ext4 0 100%
2017-07-13 15:22:23,392 - MainThread - rhevmtests.storage.helpers - INFO - Output after creating partition: 
2017-07-13 15:22:23,393 - MainThread - rhevmtests.storage.helpers - INFO - Creating a File-system on first partition
2017-07-13 15:22:32,232 - MainThread - rhevmtests.storage.helpers - INFO - Performing command '/bin/dd bs=1M count=1358 if=/dev/vda of=/mount-point_1315222664/written_test_storage status=none'
2017-07-13 15:23:17,546 - MainThread - rhevmtests.storage.helpers - INFO - Output for dd: 
2017-07-13 15:23:18,365 - MainThread - root - INFO - Get data center object by key name with {'datacenter': 'golden_env_mixed'}
2017-07-13 15:23:18,366 - MainThread - art.ll_lib.dcs - INFO - Get datacenter golden_env_mixed by name
2017-07-13 15:23:18,457 - MainThread - root - INFO - Get host resource by host_name host_mixed_1
2017-07-13 15:23:19,749 - MainThread - VDS - INFO - [10.35.82.87] Executing command python -c from vdsm import client;cli = client.connect('localhost', 54321, use_tls=True);print cli.Volume.getInfo(**{'storagepoolID': '3e4cec6f-2b10-4b54-b36b-7f0dc7f8ccfd', 'imageID': '2ff3e7f9-d06f-471e-bd5e-b3f2eaa09ef6', 'volumeID': 'd33715cf-b1cc-49e6-bc66-95bccee0ee2a', 'storagedomainID': '7a9144a8-fbf1-4b25-8794-299711457e6b'})
2017-07-13 15:23:22,554 - MainThread - art.ll_lib.vms - INFO - Get VM vm_TestCase5063_REST_ISCSI_1315151855 host
2017-07-13 15:23:22,928 - MainThread - root - INFO - Get host host_mixed_2 IP from engine by host host_mixed_2 name
2017-07-13 15:23:22,932 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-07-13 15:23:23,604 - MainThread - art.ll_lib.vms - INFO - Waiting for IP from vm_TestCase5063_REST_ISCSI_1315151855
2017-07-13 15:23:26,018 - MainThread - VDS - INFO - [10.35.82.88] Executing command python -c from vdsm import client;cli = client.connect('localhost', 54321, use_tls=True);print cli.Host.getVMFullList(**{})
2017-07-13 15:23:29,435 - MainThread - VDS - INFO - [10.35.82.88] Executing command python -c from vdsm import client;cli = client.connect('localhost', 54321, use_tls=True);print cli.VM.getStats(**{'vmID': 'b9f1a316-0051-4abc-941c-fff2a9326f71'})
2017-07-13 15:23:31,870 - MainThread - art.ll_lib.vms - INFO - Send ICMP to 10.35.83.10
2017-07-13 15:23:39,082 - MainThread - util - INFO - Boot device is: /dev/vda1
2017-07-13 15:23:41,548 - MainThread - art.ll_lib.vms - INFO - Get VM vm_TestCase5063_REST_ISCSI_1315151855 host
2017-07-13 15:23:41,923 - MainThread - root - INFO - Get host host_mixed_2 IP from engine by host host_mixed_2 name
2017-07-13 15:23:41,928 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-07-13 15:23:42,081 - MainThread - art.ll_lib.vms - INFO - Waiting for IP from vm_TestCase5063_REST_ISCSI_1315151855
2017-07-13 15:23:44,720 - MainThread - VDS - INFO - [10.35.82.88] Executing command python -c from vdsm import client;cli = client.connect('localhost', 54321, use_tls=True);print cli.Host.getVMFullList(**{})
2017-07-13 15:23:48,182 - MainThread - VDS - INFO - [10.35.82.88] Executing command python -c from vdsm import client;cli = client.connect('localhost', 54321, use_tls=True);print cli.VM.getStats(**{'vmID': 'b9f1a316-0051-4abc-941c-fff2a9326f71'})
2017-07-13 15:23:50,314 - MainThread - art.ll_lib.vms - INFO - Send ICMP to 10.35.83.10
2017-07-13 15:23:57,037 - MainThread - rhevmtests.storage.storage_virtual_disk_resize.helpers - INFO - Device vdb size: 2 GB
2017-07-13 15:23:57,044 - MainThread - art.logging - INFO - Status: passed




2017-07-13 15:23:05,762+0300 INFO  (mailbox-spm/0) [storage.SPM.Messages.Extend] processRequest: extending volume d33715cf-b1cc-49e6-bc66-95bccee0ee2a in domain 7a9144a8-fbf1-4b25-8794-299711457e6b (pool 3e4cec6f-2b10-4b54-b36b-7f0dc7f8ccfd) to size 2048 (mailbox:161)
2017-07-13 15:23:05,762+0300 DEBUG (mailbox-spm/0) [storage.StorageDomainManifest] Extending thinly-provisioned LV for volume d33715cf-b1cc-49e6-bc66-95bccee0ee2a to 2048 MB (blockSD:455)
2017-07-13 15:23:05,763+0300 INFO  (mailbox-spm/0) [storage.LVM] Extending LV 7a9144a8-fbf1-4b25-8794-299711457e6b/d33715cf-b1cc-49e6-bc66-95bccee0ee2a to 2048 megabytes (lvm:1213)
2017-07-13 15:23:05,764+0300 DEBUG (mailbox-spm/0) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a516004a0|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --size 2048m 7a9144a8-fbf1-4b25-8794-299711457e6b/d33715cf-b1cc-49e6-bc66-95bccee0ee2a (cwd None) (commands:70)



Used:
libvirt-daemon-driver-storage-core-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-config-nwfilter-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-mpath-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-admin-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-interface-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-kvm-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-libs-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-nwfilter-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-lxc-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-client-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-nss-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-devel-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-python-3.2.0-3.el7.x86_64
libvirt-daemon-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-secret-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-qemu-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-lock-sanlock-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-debuginfo-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-nodedev-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-lxc-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-docs-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-network-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-config-network-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-daemon-driver-storage-3.2.0-15.el7_rc.9a49cd47ef.x86_64
libvirt-login-shell-3.2.0-15.el7_rc.9a49cd47ef.x86_64
vdsm-4.20.1-176.gitecbab6b.el7.centos.x86_64
qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64

Comment 14 Jaroslav Suchanek 2017-07-17 11:41:12 UTC
Moving back to POST.

Comment 19 Han Han 2017-07-19 06:39:19 UTC
Verified on libvirt-3.2.0-14.el7_4.2.x86_64 and qemu-kvm-rhev-2.9.0-16.el7_4.2.x86_64.
1. Prepare a logical volume
# lsblk /dev/mapper/VG-lv1
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
VG-lv1 253:5    0   5G  0 lvm

2. Start a VM with a healthy OS
# virsh list                  
 Id    Name                           State
----------------------------------------------------
 13    HH                             running

3. Attach the logical volume, then check the 'Allocation' value
# virsh attach-disk HH /dev/mapper/VG-lv1 vdb                                     
Disk attached successfully

# virsh -r domblkinfo HH vdb                 
Capacity:       5368709120
Allocation:     0
Physical:       5368709120

4. Write some data in the VM and check that the written size matches the 'Allocation' value:
(in vm)# dd if=/dev/urandom of=/dev/vdb bs=10M count=5                                                                                 
5+0 records in
5+0 records out
52428800 bytes (52 MB) copied, 4.30506 s, 12.2 MB/s

# virsh -r domblkinfo HH vdb                 
Capacity:       5368709120
Allocation:     52428800
Physical:       5368709120

The result is as expected.
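
A minimal sketch of automating the same check with libvirt-python follows; the domain 'HH' and target 'vdb' are taken from the steps above, and before the fix the assertion on a non-zero allocation would fail because blockInfo() reported 0 for the block target.

# Sketch of the verification above: after dd writes ~52 MB into the attached
# LV from inside the guest, the host-side Allocation value must not stay 0.
import libvirt

WRITTEN = 5 * 10 * 1024 * 1024    # 5 x 10 MiB dd blocks = 52428800 bytes

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('HH')
capacity, allocation, physical = dom.blockInfo('vdb')

assert allocation != 0, 'regression: Allocation is still 0 for a block target'
print('Allocation: %d bytes (dd wrote %d bytes)' % (allocation, WRITTEN))
conn.close()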

Comment 20 Han Han 2017-07-20 05:51:28 UTC
Elad,
Please help check whether the bug is fixed in the RHV environment with libvirt-3.2.0-14.el7_4.2.

Comment 21 Elad 2017-07-23 12:05:18 UTC
Tested using libvirt-3.2.0-14.el7_4.2 from http://download-node-02.eng.bos.redhat.com/brewroot/packages/libvirt/3.2.0/14.el7_4.2/x86_64/

Created a VM in RHV with an OS disk and an additional thin-provisioned disk (a 1G LV that can be extended up to 10G) residing on iSCSI:

virsh # dumpxml test
<domain type='kvm' id='1'>
  <name>test</name>
  <uuid>ea32bbc5-9990-4f75-98f0-3edef1bdf743</uuid>
  <metadata xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-tune:qos/>
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:agentChannelName>ovirt-guest-agent.0</ovirt-vm:agentChannelName>
    <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
    <ovirt-vm:startTime type="float">1500810956.03</ovirt-vm:startTime>
</ovirt-vm:vm>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='1'>16</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>7.4-18.el7</entry>
      <entry name='serial'>41C000C8-2706-4EDF-859D-3668FF49320B</entry>
      <entry name='uuid'>ea32bbc5-9990-4f75-98f0-3edef1bdf743</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='forbid'>Conroe</model>
    <topology sockets='16' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/mnt/blockSD/3be253ff-0b21-4a79-950c-538954bd7f79/images/5b9fc7df-4142-4dc0-abbd-5f122c1228a2/5f2f651e-247d-48fb-89fd-04737c880f9c'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <serial>5b9fc7df-4142-4dc0-abbd-5f122c1228a2</serial>
      <boot order='1'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/mnt/blockSD/3be253ff-0b21-4a79-950c-538954bd7f79/images/1bec3c4e-0143-491b-882e-fd0b4bbf0f23/2aca040f-6822-4601-a0eb-d432d96aac56'/>
      <backingStore type='block' index='1'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/mnt/blockSD/3be253ff-0b21-4a79-950c-538954bd7f79/images/1bec3c4e-0143-491b-882e-fd0b4bbf0f23/2d8e8517-ceed-4cf9-965f-5c1aa13beef1'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <serial>1bec3c4e-0143-491b-882e-fd0b4bbf0f23</serial>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:16:25:8d'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/ea32bbc5-9990-4f75-98f0-3edef1bdf743.ovirt-guest-agent.0'/>
      <target type='virtio' name='ovirt-guest-agent.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/ea32bbc5-9990-4f75-98f0-3edef1bdf743.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' tlsPort='5900' autoport='yes' listen='10.35.82.56' defaultMode='secure' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='10.35.82.56' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c38,c442</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c38,c442</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>



Checked libvirt allocation:

virsh # domblkinfo test sda
Capacity:       10737418240
Allocation:     0
Physical:       1073741824


On the guest, wrote some data with dd:

[root@localhost ~]# time dd if=/dev/zero of=/dev/sda bs=8M count=500 oflag=direct
500+0 records in
500+0 records out
4194304000 bytes (4.2 GB) copied, 26.2822 s, 160 MB/s

real    0m26.300s
user    0m0.007s
sys     0m2.010s



Checked libvirt allocation again:

Every 2.0s: virsh -r domblkinfo test sda                                                                                                                             Sun Jul 23 15:03:50 2017

Capacity:       10737418240
Allocation:     600702976
Physical:       1073741824

Comment 22 errata-xmlrpc 2017-08-01 11:30:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2334

