Bug 1206521
| Summary: | 'pool-info' gets wrong allocation and available values in disk pool | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Yang Yang <yanyang> |
| Component: | libvirt | Assignee: | John Ferlan <jferlan> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 7.2 | CC: | dyuan, jferlan, mzhan, rbalakri, shyu, xuzhang, yisun |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.2.16-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-19 06:25:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
The "cause" of the issue is the disk backend will adjust the pool availability and allocated values after creating the partition and reading the size of the partition... Then the storage driver will adjust the same values again. Looks like we have to teach the storage driver to not modify the values if the backend does or the disk backend shouldn't be modifying the values - now to figure out which is "right" Patch posted upstream: http://www.redhat.com/archives/libvir-list/2015-March/msg01479.html Also determined that delete has some interesting side effects, so while not required, the following patch would be a "companion" of sorts: http://www.redhat.com/archives/libvir-list/2015-March/msg01478.html Patches reviewed and pushed upstream:
$ git describe 2ac0e647bdd33d93a374e7ef3eadf2a253c7bf79
v1.2.14-83-g2ac0e64
$ git show 2ac0e647bdd33d93a374e7ef3eadf2a253c7bf79
commit 2ac0e647bdd33d93a374e7ef3eadf2a253c7bf79
Author: John Ferlan <jferlan>
Date: Fri Mar 27 11:19:54 2015 -0400
storage: Don't duplicate efforts of backend driver
https://bugzilla.redhat.com/show_bug.cgi?id=1206521
If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the values; otherwise, it will appear as if the values
were "doubled" for each change. Additionally, since unsigned arithmetic is
used, depending on the size and operation either or both values could
appear to be much larger than they should be (in the EiB range).
Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 0.00 B
Available: 509.84 MiB
$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 600.47 MiB
Available: 16.00 EiB
The following assumes the disk backend has been updated to refresh the disk
pool on deletion of a primary partition as well as an extended partition:
$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted
$ virsh pool-info disk-pool
...
Capacity: 509.88 MiB
Allocation: 9.73 EiB
Available: 6.27 EiB
This patch will check if the backend updated the pool values and honor that
update.
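As an aside for readers following along, here is a minimal sketch of the check this commit describes. It is not the actual libvirt code; the structure and function names are purely illustrative. The idea is to snapshot the pool's accounting before the backend runs and only apply the generic driver-side adjustment if the backend left those values untouched.

```c
/* Minimal sketch, NOT the actual libvirt code: names and types are
 * illustrative. It shows the idea in commit 2ac0e647: snapshot the pool's
 * allocation/available before calling the backend, and only let the generic
 * storage driver account for the new volume if the backend did not already
 * update those values itself. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pool_info {
    uint64_t capacity;
    uint64_t allocation;
    uint64_t available;
};

/* Generic driver-side accounting after a volume create. */
static void
driver_account_create(struct pool_info *pool,
                      const struct pool_info *before_backend,
                      uint64_t vol_allocation)
{
    /* If the backend (e.g. the disk backend after writing the partition
     * table) already changed allocation or available, honor its values:
     * adjusting again would double-count the volume and, with unsigned
     * arithmetic, could wrap 'available' into the EiB range. */
    bool backend_updated =
        pool->allocation != before_backend->allocation ||
        pool->available  != before_backend->available;

    if (!backend_updated) {
        pool->allocation += vol_allocation;
        pool->available  -= vol_allocation;
    }
}

int main(void)
{
    struct pool_info pool = { .capacity   = 500ULL * 1024 * 1024,
                              .allocation = 0,
                              .available  = 500ULL * 1024 * 1024 };
    struct pool_info before = pool;          /* snapshot before backend runs */
    uint64_t vol = 300ULL * 1024 * 1024;     /* 300 MiB volume */

    /* Pretend the disk backend already refreshed the pool itself. */
    pool.allocation += vol;
    pool.available  -= vol;

    driver_account_create(&pool, &before, vol);   /* no double-count */
    printf("allocation=%" PRIu64 " available=%" PRIu64 "\n",
           pool.allocation, pool.available);
    return 0;
}
```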
Based on changes proposed and made for bug 1224018, I reverted the above change and replaced it with another one, so this needs to be tested again.

Reverted commit id: '6727bfd72'
Replacement commit id: '48809204'

commit 48809204d16348c540ac9571b517312c30da21da
Author: John Ferlan <jferlan>
Date: Tue May 26 09:23:28 2015 -0400

storage: Don't adjust pool alloc/avail values for disk backend

Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
was meant to be a generic check for the CreateVol, CreateVolFrom, and
DeleteVol paths to determine whether the storage backend changed the pool's
view of the allocation or available values. Unfortunately, this caused a
side effect: when the disk backend created an extended partition, no actual
storage was removed from the pool, so the check found no change in
allocation or available and incorrectly updated the pool values using the
size of the extended partition. A subsequent refresh of the pool would
reset the values appropriately.

This patch modifies those checks to specifically not update the pool
allocation and available for the disk backend only, rather than using
generic before and after checks.

v1.2.16-rc1-15-g4880920

Hi John, I do not quite understand the extended partition scenario you mentioned.

After creating a primary sdb1 (1 GiB) and an extended sdb2 (2 GiB) and refreshing the pool, vol-list shows a 1+2=3 GiB allocation as follows:

# virsh vol-list --details disk
Name Path Type Capacity Allocation
----------------------------------------------
sdb1 /dev/sdb1 block 1.00 GiB 1.00 GiB
sdb2 /dev/sdb2 block 2.00 GiB 2.00 GiB

But pool-info shows a 1 GiB allocation as follows:

# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.00 GiB
Available: 6.45 GiB

So my questions are:
1. Why isn't an extended partition's size counted in pool-info's allocation?
2. Do the vol-list allocation sum (3 GiB) and the pool-info allocation (1 GiB) conflict in the above example?

> 1. Why isn't an extended partition's size counted in pool-info's allocation?

The extended partition is what 'logical' types use in order to carve out space. By itself, an 'extended' partition takes up very little space (see the output from 'parted -l'). When you allocate a 'logical' partition, you carve out space from the 'extended' partition, and that's when the size gets charged to the pool.

> 2. Do the vol-list allocation sum (3 GiB) and the pool-info allocation (1 GiB) conflict in the above example?

I don't believe so - although I suppose we could avoid printing extended partitions since they're not really usable (a different issue, though).

(In reply to John Ferlan from comment #8)
> > 1. Why isn't an extended partition's size counted in pool-info's allocation?
>
> The extended partition is what 'logical' types use in order to carve out
> space. By itself, an 'extended' partition takes up very little space
> (see the output from 'parted -l'). When you allocate a 'logical' partition,
> you carve out space from the 'extended' partition, and that's when the size
> gets charged to the pool.
>
> > 2. Do the vol-list allocation sum (3 GiB) and the pool-info allocation (1 GiB) conflict in the above example?
>
> I don't believe so - although I suppose we could avoid printing extended
> partitions since they're not really usable (a different issue, though).

Thanks, I agree with you. For the second question I'll open a new, minor issue to track it.
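By way of a companion illustration, the following is a hypothetical sketch, not libvirt's actual code, of the replacement approach described in commit 48809204: rather than comparing the pool's values before and after the backend runs, the generic accounting is simply skipped for disk pools, whose backend maintains allocation and available itself (and whose extended partitions never consume pool space in the first place).

```c
/* Hypothetical sketch of the approach in commit 48809204; names and types
 * are illustrative, not libvirt's. The generic storage driver skips its own
 * allocation/available bookkeeping for the disk backend, which refreshes
 * the pool itself on volume create/delete. */
#include <stdint.h>
#include <stdio.h>

enum pool_type { POOL_TYPE_DIR, POOL_TYPE_LOGICAL, POOL_TYPE_DISK };

struct pool_info {
    uint64_t capacity;
    uint64_t allocation;
    uint64_t available;
};

static void
driver_account_delete(struct pool_info *pool,
                      enum pool_type type,
                      uint64_t vol_allocation)
{
    /* The disk backend already accounts for deleted partitions, and an
     * extended partition never consumed pool space to begin with, so the
     * generic driver must not subtract the volume's size again. */
    if (type == POOL_TYPE_DISK)
        return;

    pool->allocation -= vol_allocation;
    pool->available  += vol_allocation;
}

int main(void)
{
    struct pool_info disk_pool = { .capacity   = 8ULL << 30,
                                   .allocation = 3ULL << 30,
                                   .available  = 5ULL << 30 };

    /* Deleting a 3 GiB extended partition from a disk pool: values are left
     * alone; a later pool-refresh (driven by the backend) keeps them right. */
    driver_account_delete(&disk_pool, POOL_TYPE_DISK, 3ULL << 30);
    printf("allocation=%llu available=%llu\n",
           (unsigned long long)disk_pool.allocation,
           (unsigned long long)disk_pool.available);
    return 0;
}
```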
This issue will be verified.

Verified on libvirt-1.2.16-1.el7.x86_64
1. prepare a disk pool
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.00 GiB
Available: 6.45 GiB
# virsh pool-dumpxml disk
<pool type='disk'>
<name>disk</name>
<uuid>0d8c16e6-2be8-4795-9390-9c531eac0817</uuid>
<capacity unit='bytes'>7997583360</capacity>
<allocation unit='bytes'>1073741824</allocation>
<available unit='bytes'>6923809792</available>
<source>
<device path='/dev/sdb'>
<freeExtent start='31744' end='1048576'/>
<freeExtent start='1074790400' end='7997583360'/>
</device>
<format type='dos'/>
</source>
<target>
<path>/dev</path>
</target>
</pool>
2. create a primary partition and check pool info
# virsh vol-create-as disk sdb2 --capacity 500M
Vol sdb2 created
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.49 GiB
Available: 5.96 GiB <===== 500M taken up, as expected
3. create an extended partition on /dev/sdb (3G size)
# virsh vol-create-as --format extended disk sdb3 3G
Vol sdb3 created
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.49 GiB
Available: 5.96 GiB <=== correct, an extended partition doesn't take up space until a logical partition is created.
4. create 2 logical partitions (one 500 MiB and one 1 GiB) with fdisk and check pool info
# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (2 primary, 1 extended, 1 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (3126104-9418109, default 3127296):
Using default value 3127296
Last sector, +sectors or +size{K,M,G} (3127296-9418109, default 9418109): +500M
Partition 5 of type Linux and of size 500 MiB is set
Command (m for help): n
Partition type:
p primary (2 primary, 1 extended, 1 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 6
First sector (3126104-9418109, default 4153344):
Using default value 4153344
Last sector, +sectors or +size{K,M,G} (4153344-9418109, default 9418109): +1G
Partition 6 of type Linux and of size 1 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# virsh pool-refresh disk
Pool disk refreshed
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 2.98 GiB
Available: 4.47 GiB <===== correct, 1.5 GiB of logical partition space taken up
5. delete all the newly created partitions above and check the pool info
5.1 # virsh vol-list disk --details
Name Path Type Capacity Allocation
------------------------------------------------
sdb1 /dev/sdb1 block 1.00 GiB 1.00 GiB
sdb2 /dev/sdb2 block 500.42 MiB 500.42 MiB
sdb3 /dev/sdb3 block 3.00 GiB 3.00 GiB
sdb5 /dev/sdb5 block 500.00 MiB 500.00 MiB
sdb6 /dev/sdb6 block 1.00 GiB 1.00 GiB
We'll delete sdb6 -> sdb3 (sdb5 should be deleted in this step because it's a logical partition inside sdb3, which is an extended partition) -> sdb2.
5.2
# virsh vol-delete sdb6 disk
Vol sdb6 deleted
# virsh pool-refresh disk
Pool disk refreshed
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.98 GiB
Available: 5.47 GiB <=== 1 GiB freed, as expected.
5.3
# virsh vol-delete sdb3 disk
Vol sdb3 deleted
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.49 GiB
Available: 5.96 GiB <==== 500M freed, as expected
5.4
# virsh vol-delete sdb2 disk
Vol sdb2 deleted
# virsh pool-info disk
Name: disk
UUID: 0d8c16e6-2be8-4795-9390-9c531eac0817
State: running
Persistent: yes
Autostart: no
Capacity: 7.45 GiB
Allocation: 1.00 GiB
Available: 6.45 GiB <===== 500M freed, as expected
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html
Description of problem:
'pool-info' gets wrong allocation and available values in disk pool

Version-Release number of selected component (if applicable):
libvirt-1.2.13-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. prepare a disk pool

<pool type="disk">
<name>disk</name>
<source>
<device path='/dev/sdb'/>
<format type='dos'/>
</source>
<target>
<path>/dev</path>
</target>
</pool>

# virsh pool-info disk
Name: disk
UUID: b0b3c21e-31df-44d0-913e-f8f7bda080d1
State: running
Persistent: yes
Autostart: no
Capacity: 996.22 MiB
Allocation: 0.00 B
Available: 996.19 MiB

2. create a vol in the disk pool

# virsh vol-create-as disk sdb1 --capacity 500M
Vol sdb1 created

3. check pool info

# virsh pool-info disk
Name: disk
UUID: b0b3c21e-31df-44d0-913e-f8f7bda080d1
State: running
Persistent: yes
Autostart: no
Capacity: 996.22 MiB
Allocation: 1000.54 MiB
Available: 16777216.00 TiB

Actual results:
In step 3, the values of both allocation and available are wrong. The allocation reported by 'pool-info' is always twice the specified capacity.

Expected results:
'pool-info' reports the correct allocation and available values.

Additional info:
Both allocation and available are correct after a pool-refresh.
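As a side note, the implausible "Available: 16777216.00 TiB" (i.e. 16 EiB) shown above is consistent with unsigned 64-bit arithmetic wrapping around when the same volume is charged against 'available' twice. A small standalone illustration (not libvirt code):

```c
/* Standalone illustration (not libvirt code): subtracting a 500 MiB volume
 * from 'available' twice -- once by the disk backend and once more by the
 * generic storage driver -- drives the counter below zero, and the unsigned
 * 64-bit value wraps around into the EiB range. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t available = 996ULL * 1024 * 1024;   /* ~996 MiB pool */
    uint64_t vol_size  = 500ULL * 1024 * 1024;   /* 500 MiB volume */

    available -= vol_size;   /* backend accounts for the new partition */
    available -= vol_size;   /* driver subtracts it again: underflow */

    printf("available = %" PRIu64 " bytes (~%.2f EiB)\n",
           available, available / (double)(1ULL << 60));
    return 0;
}
```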