Bug 670529 - LVM storage pool creates smaller-than-expected volume
Summary: LVM storage pool creates smaller-than-expected volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Osier Yang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-01-18 15:39 UTC by Matthew Booth
Modified: 2011-05-19 13:25 UTC
CC List: 7 users

Fixed In Version: libvirt-0.8.7-4.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-05-19 13:25:47 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2011:0596 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2011-05-18 17:56:36 UTC

Description Matthew Booth 2011-01-18 15:39:12 UTC
Description of problem:
I have some code which creates a volume with:
        <volume>
            <name>$name</name>
            <capacity>$size</capacity>
            <allocation>$allocation</allocation>
            <target>
                <format type='$format'/>
            </target>
        </volume>

When the storage pool is LVM-backed, and size is 2147483649 bytes, the volume is created successfully, but the size is 2147483648 bytes (1 byte too small). This results in an error when I try to write 2147483649 bytes to the newly created volume.

danpb points out the offending code does:
  snprintf(size, sizeof(size)-1, "%lluK", vol->capacity/1024);

The rounding is due to the 1 KiB granularity of an LVM volume. However, this code rounds down; it would be safer to round up.

Version-Release number of selected component (if applicable):
Actually reproduced in libvirt-0.8.3-2.fc14.x86_64, but present in RHEL 6.

Comment 2 Osier Yang 2011-01-22 11:18:52 UTC
http://www.redhat.com/archives/libvir-list/2011-January/msg00932.html

patch posted to upstream.

Comment 3 Osier Yang 2011-01-27 09:20:41 UTC
http://post-office.corp.redhat.com/archives/rhvirt-patches/2011-January/msg01409.html

patch sent to rhvirt-patches

Comment 6 Min Zhan 2011-01-28 09:57:36 UTC
Verified as PASSED with the following environment:
# uname -a
Linux dhcp-65-85.nay.redhat.com 2.6.32-99.el6.x86_64 #1 SMP Fri Jan 14 10:46:00
EST 2011 x86_64 x86_64 x86_64 GNU/Linux

libvirt-0.8.7-4.el6.x86_64
kernel-2.6.32-99.el6.x86_64
qemu-kvm-0.12.1.2-2.132.el6.x86_64

Steps:
1. Create a LVM storage pool
# cat test.xml
     <pool type='logical'>
       <name>test</name>
       <source>
         <name>test</name>
         <format type='lvm2'/>
         <device path='/dev/sda5'/>
       </source>
       <target>
         <path>/dev/test</path>
       </target>
     </pool>

# virsh pool-define test.xml
Pool test defined from test.xml

# virsh pool-build test
Pool test built

# virsh pool-start test
Pool test started

# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
pool-mpath           inactive   no        
test                 active     no  

# virsh pool-dumpxml test
<pool type='logical'>
  <name>test</name>
  <uuid>d54b19d3-339c-112c-27f8-11fff54381e1</uuid>
  <capacity>31448891392</capacity>
  <allocation>0</allocation>
  <available>31448891392</available>
  <source>
    <device path='/dev/sda5'/>
    <name>test</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/test</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

2. Create a volume in the LVM pool.
# cat vol.xml
       <volume>
            <name>vol</name>
            <capacity>2147483649</capacity>
            <allocation>2147483649</allocation>
            <target>
                <format type='raw'/>
            </target>
        </volume>

# virsh vol-create test vol.xml 
Vol vol created from vol.xml

[root@dhcp-65-85 ~]# virsh vol-dumpxml --pool test vol
<volume>
  <name>vol</name>
  <key>wQ7Ilp-unr9-ED1J-LqmN-3Bm5-5rYF-iTOlfC</key>
  <source>
    <device path='/dev/sda5'>
      <extent start='0' end='2151677952'/>
    </device>
  </source>
  <capacity>2151677952</capacity>
  <allocation>2151677952</allocation>
  <target>
    <path>/dev/test/vol</path>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>6</group>
      <label>system_u:object_r:fixed_disk_device_t:s0</label>
    </permissions>
  </target>
</volume>

3. According to the bug description, 2147483649 / 1024 = 2097152.0009 KiB; since this should round up, the correct capacity and allocation are 2097153 KiB, i.e. 2097153 * 1024 = 2147484672 bytes.

I also have some debug info below showing that the allocated size is correct (2097153 KiB), so I think the patch fixed the bug.

....
Jan 29 00:15:35 dhcp-65-85 libvirtd: 00:15:35.822: 26681: warning : storageVolumeCreateXML:1301 :  voldef->capacity: 2147483649
Jan 29 00:15:35 dhcp-65-85 libvirtd: 00:15:35.822: 26681: warning : virStorageBackendLogicalCreateVol:607 :  vol->capacity: 2147483649
Jan 29 00:15:35 dhcp-65-85 libvirtd: 00:15:35.822: 26681: warning : virStorageBackendLogicalCreateVol:612 :  capacity: 2097153
....

But as we see in the volume dumpxml, the actual allocation is displayed as 2151677952 bytes / 1024 = 2101248 KiB, which is much bigger than the expected 2097153 KiB. I will file a new bug 673455 for this incorrect display problem.


----------------
Also, I can reproduce this bug 670529 with libvirt-0.8.1-27.el6.x86_64 and libvirt-0.8.7-3.el6.x86_64. With # virsh vol-dumpxml, the capacity and allocation are both 2147483648.

# virsh vol-dumpxml --pool test vol
<volume>
  <name>vol</name>
  <key>vIbXY1-LTcK-1zY7-kumN-KOm7-3UAQ-BZ1N8x</key>
  <source>
    <device path='/dev/sda5'>
      <extent start='0' end='2147483648'/>
    </device>
  </source>
  <capacity>2147483648</capacity>
  <allocation>2147483648</allocation>
  <target>
    <path>/dev/test/vol</path>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>6</group>
      <label>system_u:object_r:fixed_disk_device_t:s0</label>
    </permissions>
  </target>
</volume>

Comment 9 errata-xmlrpc 2011-05-19 13:25:47 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0596.html

