Bug 1446486 - The disk pool doesn't show automatically when it is created
Summary: The disk pool doesn't show automatically when it is created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-manager
Version: 7.4
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
: ---
Assignee: Pavel Hrdina
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-04-28 08:11 UTC by Yuandong Liu
Modified: 2018-04-10 11:42 UTC (History)
7 users (show)

Fixed In Version: virt-manager-1.4.3-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 11:40:46 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Virt-manager debug log (26.65 KB, text/plain)
2017-04-28 08:13 UTC, Yuandong Liu


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0726 0 None None None 2018-04-10 11:42:12 UTC

Description Yuandong Liu 2017-04-28 08:11:53 UTC
Description of problem:

In virt-manager, when adding a disk-type storage pool to a connection, the pool does not appear in the left-side list immediately after creation finishes. Only after exiting and reopening virt-manager does the storage pool show up on the left side.

Version-Release number of selected component (if applicable):

libvirt-3.2.0-3.el7.x86_64
libvirt-python-3.2.0-1.el7.x86_64
qemu-kvm-rhev-2.8.0-6.el7.x86_64
kernel-3.10.0-648.el7.x86_64
virt-manager-1.4.1-2.el7.noarch

How reproducible:

100%

Steps to Reproduce:

1.Make sure there is an additional free storage device on the host, and clear it:
    #dd if=/dev/zero of=/dev/sdb bs=1M count=10
2.Launch virt-manager: #virt-manager.
3.Click Edit->Host Details.
4.Click Storage tab on Connection Details dialogue.
5.Click Add pool button.
6.Fill out pool name and select the type 'disk', then click Forward button.
7.Fill out Target Path(/dev), source path(/dev/sdb), check build box.
8.Click Finish button.

Actual results:

As described above: the newly created disk pool does not appear in the storage list until virt-manager is restarted.

Expected results:

The disk-type storage pool should appear in the left side of the virt-manager connection storage list as soon as it is created, with no need to exit and reopen virt-manager.

Comment 2 Yuandong Liu 2017-04-28 08:13:03 UTC
Created attachment 1274833 [details]
Virt-manager debug log

Comment 3 Cole Robinson 2017-04-28 15:28:59 UTC
Here's the relevant snippet:

[Fri, 28 Apr 2017 15:24:00 virt-manager 5136] DEBUG (storage:526) Creating storage pool 'yua-disk' with xml:
<pool type="disk">
  <name>yua-disk</name>
  <uuid>a7aba847-dd6a-454f-a354-08a1a6f258f9</uuid>
  <source>
    <device path="/dev/sdb"/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>

[Fri, 28 Apr 2017 15:24:00 virt-manager 5136] DEBUG (connection:782) storage pool lifecycle event: storage=yua-disk event=0 reason=0
[Fri, 28 Apr 2017 15:24:00 virt-manager 5136] DEBUG (libvirtobject:194) Error initializing libvirt state for <storagepool.vmmStoragePool object at 0x7fc2e41b4550 (virtManager+storagepool+vmmStoragePool at 0x1f033a0)>
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 191, in init_libvirt_state
    self._init_libvirt_state()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 150, in _init_libvirt_state
    self.tick()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 147, in tick
    self._refresh_status()
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 234, in _refresh_status
    newstatus = self._get_backend_status()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 143, in _get_backend_status
    return self._backend_get_active()
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 251, in _backend_get_active
    return (bool(self._backend.isActive()) and
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3165, in isActive
    if ret == -1: raise libvirtError ('virStoragePoolIsActive() failed', pool=self)
libvirtError: Storage pool not found: no storage pool with matching uuid 'a7aba847-dd6a-454f-a354-08a1a6f258f9' (yua-disk)


So we define the pool XML, virt-manager receives a signal that a new pool showed up, and we try to call isActive() on it, which fails with 'pool not found'. Seems like a libvirt bug.
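The race Cole describes can be guarded against on the client side: the pool may not be queryable (or may have vanished) between the lifecycle event and the isActive() call, so the status refresh should treat the libvirt error as "object gone" rather than letting it propagate. A minimal sketch of that pattern, using hypothetical stand-in classes instead of real libvirt handles:

```python
# Sketch of defensively refreshing object status when the backend may
# disappear between a lifecycle event and the status query.
# FakePool and FakeLibvirtError are hypothetical stand-ins for a
# libvirt storage pool handle and libvirt.libvirtError.

class FakeLibvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""

class FakePool:
    def __init__(self, exists=True):
        self._exists = exists

    def isActive(self):
        if not self._exists:
            # Mirrors "Storage pool not found: no storage pool with
            # matching uuid ..." raised by the real bindings.
            raise FakeLibvirtError("Storage pool not found")
        return True

def refresh_status(pool):
    """Return 'active', 'inactive', or 'gone' instead of raising."""
    try:
        return "active" if pool.isActive() else "inactive"
    except FakeLibvirtError:
        # The object raced away between the event and the query;
        # report it as gone so the caller can retry or drop it.
        return "gone"

print(refresh_status(FakePool(exists=True)))   # -> active
print(refresh_status(FakePool(exists=False)))  # -> gone
```

This is only an illustration of the error-handling shape; the actual virt-manager code paths are the ones shown in the traceback above.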

Comment 4 Xiaodai Wang 2017-04-29 01:18:54 UTC
This backtrace appears because the reporter didn't tick the 'build' option and the pool creation failed. Is it a bug that 'build' must be ticked to create a disk pool?

He then ticked 'build' and the pool was created successfully, but the pool still didn't display in virt-manager. You can see the next part of the logs below the backtrace.

Comment 5 Pavel Hrdina 2017-09-08 08:33:05 UTC
Upstream patch posted:

https://www.redhat.com/archives/virt-tools-list/2017-September/msg00058.html

Comment 6 Pavel Hrdina 2017-09-11 07:39:46 UTC
Upstream commit:

commit 12117ba148ec47eb2aa15e192c6026a2c3026ed1
Author: Pavel Hrdina <phrdina>
Date:   Fri Sep 8 09:36:58 2017 +0200

    connection: change blacklist from array to dict
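The commit subject suggests the shape of the fix: virt-manager keeps a blacklist of objects whose libvirt-state initialization failed, and switching it from an array to a dict makes it possible to track per-object failure counts and retry instead of permanently hiding the object after one transient error. A rough illustration of that idea (class and method names here are hypothetical, not virt-manager's actual internals):

```python
# Illustrative sketch: an array blacklist hides an object after one
# failure; a dict keyed by object name can count consecutive failures
# and allow a few retries before giving up. Names are hypothetical.

RETRY_LIMIT = 3

class Connection:
    def __init__(self):
        self._blacklist = {}  # object name -> consecutive init failures

    def record_failure(self, name):
        self._blacklist[name] = self._blacklist.get(name, 0) + 1

    def record_success(self, name):
        # A successful init clears any earlier transient failures.
        self._blacklist.pop(name, None)

    def is_blacklisted(self, name):
        return self._blacklist.get(name, 0) >= RETRY_LIMIT

conn = Connection()
conn.record_failure("yua-disk")             # transient 'pool not found'
assert not conn.is_blacklisted("yua-disk")  # still eligible for retry
conn.record_success("yua-disk")             # next tick finds the pool
assert "yua-disk" not in conn._blacklist
```

With a plain array, the first failed init would have excluded the pool for the life of the connection, which matches the symptom of the pool only appearing after a restart.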

Comment 8 zhoujunqin 2017-09-22 04:19:24 UTC
I can reproduce this bug with package:
virt-manager-1.4.1-7.el7.noarch

Then try to verify this bug with new build:
virt-manager-1.4.3-1.el7.noarch
virt-install-1.4.3-1.el7.noarch
libvirt-3.7.0-2.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64

Steps:
1.Make sure there is an additional free storage device on the host, such as 'sdb', and clear the device with:

#dd if=/dev/zero of=/dev/sdb bs=1M count=10

2.Launch virt-manager: 
#virt-manager

3.Click Edit->Connection Details.

4.Click Storage tab on Connection Details dialogue.

5.Click Add pool button.

6.Fill out pool name and select the type 'disk', then click Forward button.

7.Fill out Target Path(/dev), source path(/dev/sdb), check build box.

8.Click Finish button.

Result:
'Add a New Storage Pool' dialogue is closed, and the newly created pool is added to storage list.
So move this bug from ON_QA to VERIFIED, thanks.

Comment 11 errata-xmlrpc 2018-04-10 11:40:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0726

