Bug 1024159

Summary: pool-info gets the wrong 'Allocation' and 'Available' after vol-create-as failed
Product: Red Hat Enterprise Linux 6
Reporter: chhu
Component: libvirt
Assignee: John Ferlan <jferlan>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 6.6
CC: ajia, dyuan, mzhan, rbalakri, shyu, xuzhang
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: libvirt-0.10.2-33.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-10-14 04:18:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description chhu 2013-10-29 02:58:11 UTC
Description of problem:
pool-info gets the wrong 'Allocation' and 'Available' after vol-create-as failed.

Version-Release number of selected component:
libvirt-0.10.2-29.el6.x86_64
qemu-kvm-0.12.1.2-2.414.el6.x86_64

How reproducible:
100%

Steps:
1. Create a file system pool
# cat testpool.xml
<pool type='fs'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
    <device path='/dev/sda6'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/TestPool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
# virsh pool-create testpool.xml
Pool TestPool created from testpool.xml

2. Get the pool info.
# virsh pool-info TestPool
Name:           TestPool
UUID:           6bf80895-10b6-75a6-6059-89fdea2aefb7
State:          running
Persistent:     no
Autostart:      no
Capacity:       99.96 GiB
Allocation:     19.46 GiB
Available:      80.49 GiB

3. Try to create a volume larger than the Available space.
# virsh vol-create-as TestPool test1.img 100G
error: Failed to create vol test1.img
error: cannot fill file '/var/lib/libvirt/images/TestPool/test1.img': No space left on device

4. Run pool-info again; it reports the wrong 'Allocation' and 'Available'.
# virsh pool-info TestPool
Name:           TestPool
UUID:           6bf80895-10b6-75a6-6059-89fdea2aefb7
State:          running
Persistent:     no
Autostart:      no
Capacity:       99.96 GiB
Allocation:     16777215.92 TiB
Available:      180.49 GiB

# df -lh
Filesystem            Size  Used Avail Use% Mounted on
......
/dev/sda6             100G   20G   81G  20% /var/lib/libvirt/images/TestPool

Actual results:
In step 4, pool-info reports the wrong 'Allocation' and 'Available'.

Expected results:
In step 4, pool-info should report the correct 'Allocation' and 'Available'.
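The bogus Allocation reading is consistent with an unsigned 64-bit underflow: subtracting the failed volume's 100 GiB from the pool's ~19.46 GiB allocation wraps around 2^64 bytes (16777216 TiB), and Available correspondingly gains 100 GiB (80.49 + 100 = 180.49 GiB). A small Python check of the arithmetic (the exact byte counts are approximations reconstructed from the rounded pool-info output):

```python
GiB = 1 << 30
TiB = 1 << 40

alloc = round(19.46 * GiB)           # pool allocation before the failed create
vol = 100 * GiB                      # size requested for test1.img
wrapped = (alloc - vol) % (1 << 64)  # subtraction wraps in unsigned 64-bit

print(round(wrapped / TiB, 2))       # 16777215.92, matching the pool-info output
```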

Comment 2 John Ferlan 2014-04-09 13:24:19 UTC
I pushed an upstream fix for this now and am working through the back port.  The bug was caused by the storageVol[ume]Delete() logic: when run from the CreateXML[From] APIs, it adjusted the pool's size to account for removal of the pool element even though the CreateXML[From] code hadn't yet accounted for the element's addition.  The fix makes the adjustment on deletion conditional on whether the call came from the CreateXML[From] API paths.

Upstream fix commit id is '0c2305b31c283bc98cab7261d3021ce1a9a0b713'.
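The accounting logic can be sketched as follows. This is a hypothetical Python model of the fix described above, not the actual libvirt C code; the `Pool` class and `delete_vol` function are invented for illustration:

```python
GiB = 1 << 30

class Pool:
    """Toy model of a storage pool's size accounting."""
    def __init__(self, capacity, allocation):
        self.capacity = capacity
        self.allocation = allocation
        self.available = capacity - allocation

def delete_vol(pool, vol_size, from_create_api):
    # Before the fix, the delete path unconditionally subtracted the
    # volume's size from the pool totals, even when invoked to clean up
    # a failed create that had never added the volume's size in the
    # first place. The fix skips the adjustment on that path.
    if not from_create_api:
        pool.allocation -= vol_size
        pool.available += vol_size

pool = Pool(capacity=100 * GiB, allocation=20 * GiB)
delete_vol(pool, 100 * GiB, from_create_api=True)  # failed-create cleanup
assert pool.allocation == 20 * GiB                 # totals left unchanged
assert pool.available == 80 * GiB
```

A regular vol-delete (with `from_create_api=False`) still adjusts the totals as before.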

Adjusted the testpool.xml a bit:

# cat testpool.xml
<pool type='dir'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/images/TestPool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

# mkdir /var/lib/libvirt/images/TestPool

# virsh pool-create testpool.xml
Pool TestPool created from testpool.xml

# virsh pool-info TestPool
Name:           TestPool
UUID:           6bf80895-10b6-75a6-6059-89fdea2aefb7
State:          running
Persistent:     no
Autostart:      no
Capacity:       134.45 GiB
Allocation:     22.75 GiB
Available:      111.70 GiB

# virsh vol-create-as TestPool test1.img 200G
error: Failed to create vol test1.img
error: cannot allocate 214748364800 bytes in file '/home/bz1024159/TestPool/test1.img': No space left on device

# virsh pool-info TestPool
Name:           TestPool
UUID:           6bf80895-10b6-75a6-6059-89fdea2aefb7
State:          running
Persistent:     no
Autostart:      no
Capacity:       134.45 GiB
Allocation:     22.75 GiB
Available:      111.70 GiB

# virsh pool-destroy TestPool
Pool TestPool destroyed

Comment 6 Xuesong Zhang 2014-07-14 09:09:48 UTC
Verified this bug with libvirt-0.10.2-40.el6.x86_64; the bug is fixed.

Steps:
1. Create a file system pool
# cat testpool.xml
<pool type='fs'>
  <name>TestPool</name>
  <uuid>6bf80895-10b6-75a6-6059-89fdea2aefb7</uuid>
  <source>
    <device path='/dev/sda7'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/mnt</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
# virsh pool-create testpool.xml
Pool TestPool created from testpool.xml

2. Get the pool info.
# virsh pool-info TestPool
Name:           TestPool
UUID:           6bf80895-10b6-75a6-6059-89fdea2aefb7
State:          running
Persistent:     no
Autostart:      no
Capacity:       19.69 GiB
Allocation:     172.07 MiB
Available:      19.52 GiB

3. Try to create a volume larger than the Available space.
# virsh vol-create-as TestPool test2.img 100G
error: Failed to create vol test2.img
error: cannot fill file '/mnt/test2.img': No space left on device

4. Run pool-info again; 'Allocation' and 'Available' are the same as in step 2.
# virsh pool-info TestPool
Name:           TestPool
UUID:           6bf80895-10b6-75a6-6059-89fdea2aefb7
State:          running
Persistent:     no
Autostart:      no
Capacity:       19.69 GiB
Allocation:     172.07 MiB
Available:      19.52 GiB

# df -lh
Filesystem      Size  Used Avail Use% Mounted on
......
/dev/sda7        20G  173M   19G   1% /mnt

Comment 8 errata-xmlrpc 2014-10-14 04:18:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1374.html