Bug 1447962 - The size info of Pool cannot refresh automatically
Summary: The size info of Pool cannot refresh automatically
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Pavel Hrdina
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 636027
Blocks: 1473046
 
Reported: 2017-05-04 10:25 UTC by Yuandong Liu
Modified: 2019-06-16 23:45 UTC
9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-16 23:45:36 UTC
Embargoed:



Description Yuandong Liu 2017-05-04 10:25:47 UTC
Description of problem:
When adding a new storage pool in virt-manager, the pool's size information does not refresh automatically.

Version-Release number of selected component (if applicable):
virt-manager-1.4.1-3.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Make sure a volume exists whose allocation is large enough that a change in the pool's size information is easy to observe.
# virsh vol-dumpxml --pool default test.img
<volume type='file'>
  <name>test.img</name>
  <key>/var/lib/libvirt/images/test.img</key>
  <source>
  </source>
  <capacity unit='bytes'>8589934592</capacity>
  <allocation unit='bytes'>8589934592</allocation>
  <target>
    <path>/var/lib/libvirt/images/test.img</path>
    <format type='raw'/>
    <permissions>
      <mode>0600</mode>
      <owner>107</owner>
      <group>107</group>
      <label>system_u:object_r:svirt_image_t:s0:c298,c891</label>
    </permissions>
    <timestamps>
      <atime>1434608835.157843853</atime>
      <mtime>1434608755.299489950</mtime>
      <ctime>1434608755.299489950</ctime>
    </timestamps>
  </target>
</volume>
2. Launch virt-manager, click Edit -> Connection Details -> Storage, and select the default pool that contains the volume file.
3. Note down the size information shown for the default pool.
4. Select the volume file and delete it from the pool.
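As a side note on the volume in step 1: it reports equal <capacity> and <allocation>, i.e. a fully allocated raw image. A minimal Python sketch (using a hypothetical temporary file, not part of the reproduction) of how those two numbers diverge for a sparse file:

```python
import os
import tempfile

# Create an 8 MiB sparse file: its apparent size (what libvirt reports
# as <capacity>) is 8 MiB, but the blocks actually allocated on disk
# (what libvirt reports as <allocation>) stay near zero until written.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(8 * 1024 * 1024)

st = os.stat(path)
capacity = st.st_size            # apparent size  -> <capacity>
allocation = st.st_blocks * 512  # on-disk blocks -> <allocation>
print(capacity, allocation)

os.unlink(path)
```

For the volume above both values are 8589934592 bytes, so deleting it should free a visibly large amount of pool space.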


Actual results:
The size information does not change.

Expected results:
The size information should update to the correct value.

Additional info:
The size information updates to the correct value after restarting the pool or relaunching virt-manager. This bug has occurred before; see Bug 1233531.

Comment 2 Pavel Hrdina 2017-10-05 12:37:51 UTC
Moving to upstream. We could fix this partially in virt-manager by refreshing the storage pool when we add or delete a storage volume, but there would still be an issue with updates not issued by virt-manager, for example via virsh or another instance of virt-manager. In order to properly support refreshing the storage pool information, we need lifecycle events for storage pools and storage volumes.

Comment 3 capsicumw 2018-07-05 19:28:49 UTC
Similar issue here. Virtual Machine Manager version 1.4.0 running in Debian stable with backport kernel 4.16

I created an LVM volume group and a 71GiB logical volume, mounted the LV on the default ./images directory, refreshed the default pool, and created a few guest images. After this, VMM -> Connection Details shows 1.36G free / 68.26G in use. So far it is as expected.

Then I expanded the LV with $ lvextend -L +40G VG/name-of-volume, so the LV is now 111G (double-checked with several tools). virt-manager still shows 1.36G/68.26G. I have stopped and started the pool, remounted the LV on the directory, refreshed the contents of the pool, and fully closed and restarted virt-manager.

I also have an "LVM volume group" added as a pool of raw storage, this pool does recognize changes to the size of an LV. So the issue seems specific to "Filesystem Directory" pools.

I also just tried unmounting the LV from the directory and refreshing the contents of the pool; the size did change in Connection Details to match the base filesystem. So I mounted the LV again, refreshed the contents in Connection Details, and it is right back to 1.36G free / 68.26G used.

Comment 4 capsicumw 2018-07-05 20:40:50 UTC
Please excuse me, I had forgotten to resize the LV's filesystem on the host. In other words, I extended the logical block device but did not run:
$ resize2fs /dev/mapper/LV
This solved my issue.
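For context: a directory-backed pool's size comes from the mounted filesystem's statistics, not from the underlying block device, which is why lvextend alone left the reported numbers unchanged until resize2fs grew the filesystem. A rough sketch of that kind of computation (statvfs on an assumed path; not virt-manager's actual code):

```python
import os

def dir_pool_sizes(path):
    # A directory pool's capacity/allocation/available derive from
    # filesystem statistics, so growing the block device underneath the
    # filesystem changes nothing here until the filesystem is resized.
    st = os.statvfs(path)
    capacity = st.f_frsize * st.f_blocks
    available = st.f_frsize * st.f_bavail
    allocation = capacity - st.f_frsize * st.f_bfree
    return capacity, allocation, available

capacity, allocation, available = dir_pool_sizes("/")
print(capacity, allocation, available)
```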

Comment 5 Cole Robinson 2019-06-16 23:45:36 UTC
Fixed upstream now:

commit 337e84083f7bf476ca873311afdf529549dd781b (HEAD -> master)
Author: Cole Robinson <crobinso>
Date:   Sun Jun 16 19:41:28 2019 -0400

    storagepool: Force refresh XML on refresh signal

