Bug 508357 - errors occur when adding volumes to a storage pool more than 19 times
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: virt-manager
5.4
All Linux
low Severity medium
: rc
: ---
Assigned To: Cole Robinson
Virtualization Bugs
:
Depends On:
Blocks:
Reported: 2009-06-26 13:36 EDT by Mark Xie
Modified: 2010-03-30 04:50 EDT (History)
7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2010-03-30 04:50:53 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Backport of migrate dialog from upstream (71.20 KB, text/plain)
2009-12-15 14:22 EST, Cole Robinson


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2010:0281 normal SHIPPED_LIVE virt-manager bug fix update 2010-03-29 09:59:22 EDT

Description Mark Xie 2009-06-26 13:36:05 EDT
Description of problem:
An error occurs after performing the volume-add action more than 19 times.

Version-Release number of selected component (if applicable):
virt-manager-0.6.1-4.el5
libvirt-0.6.3-6.el5
libvirt-cim-0.5.5-2.el5
libvirt-python-0.6.3-6.el5
kmod-cmirror-xen-0.1.21-14.el5
kernel-xen-2.6.18-152.el5
kernel-xen-devel-2.6.18-152.el5
xen-libs-3.0.3-87.el5
kmod-gfs-xen-0.1.33-2.el5
xen-3.0.3-87.el5
kmod-gnbd-xen-0.1.5-2.el5

How reproducible:
100%

Steps to Reproduce:
1. Open virt-manager and select a connection, then select "Edit -> Host Details" to open the "Host Details" window.
2. Select the "Storage" tab and add a new storage pool by clicking the "+" button at the bottom left.
3. With the newly added storage pool selected, add a new volume by clicking the "New Volume" button at the bottom center.
4. Add 19 volumes by repeating step 3.
5. Try to add the 20th volume.
  
Actual results:
An error occurs when trying to add the 20th volume, and the error popup then appears on every subsequent attempt to add a volume until virt-manager is closed and reopened.

Expected results:
Volumes should be added normally.


Error messages:
Error creating vol: this function is not supported by the hypervisor: virStoragePoolLookupByName

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/createvol.py", line 178, in _async_vol_create
    newpool = newconn.storagePoolLookupByName(self.parent_pool.get_name())
  File "/usr/lib64/python2.4/site-packages/libvirt.py", line 1260, in storagePoolLookupByName
    if ret is None:raise libvirtError('virStoragePoolLookupByName() failed', conn=self)
libvirtError: this function is not supported by the hypervisor: virStoragePoolLookupByName


Additional info:

1) We can't add a new storage pool with virt-manager
2) We can't add a new volume to any storage pool with virt-manager
3) We can't add a new storage pool with virsh
4) We can't add a new volume to any storage pool with virsh
5) Even if all 19 volume-add attempts fail (for example, by requesting an unsupported volume format), the issue is still triggered.
6) After closing virt-manager, it takes another 19 add attempts before the issue appears again.
7) Whenever virsh is used while virt-manager is in this state, virsh hits the same issue too. On the other hand, even with virsh still open, if we close the affected virt-manager and restart it, it takes another 19 attempts before the issue appears again in virt-manager.
8) Adding a volume 20 times with virsh works fine.
9) Deleting more than 19 volumes with virt-manager also works fine; we do not hit this strange issue.
Comment 1 Mark Xie 2009-06-26 13:41:09 EDT
Sorry, part of the description was unclear; it should read:

1) **After the error occurs,** we can't add a new storage pool with virt-manager
2) **After the error occurs,** we can't add a new volume to any storage pool with virt-manager
3) **After the error occurs,** we can't add a new storage pool with virsh
4) **After the error occurs,** we can't add a new volume to any storage pool with virsh
Comment 2 Cole Robinson 2009-06-26 14:27:39 EDT
If this messes up virsh, there is at least a bug at the libvirt level, so reassigning there.
Comment 3 Dave Allan 2009-06-26 15:07:38 EDT
I reproduced the problem, and I think it's a resource leak in virt-manager. 
I'm seeing the following logged into /var/log/messages:

Jun 26 22:55:22 virtlab103 libvirtd: 22:55:22.516: error :
qemudDispatchServer:1218 : Too many active clients (20), dropping connection

I'm reassigning to Cole, with his permission.
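The 20-client ceiling in that log line corresponds to libvirtd's max_clients setting, whose default at the time was 20. As a stopgap (it masks the leak rather than fixing it), the limit can be raised in libvirtd's configuration; a sketch, assuming the stock config file location:

```
# /etc/libvirt/libvirtd.conf
# max_clients caps the number of concurrent client connections to
# libvirtd; the default of 20 is what produces the
# "Too many active clients (20), dropping connection" message.
max_clients = 64
```

libvirtd must be restarted for the change to take effect.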
Comment 4 Cole Robinson 2009-06-29 13:21:45 EDT
I don't think this is a 5.4 candidate.

You should be able to reproduce this by creating 20 storage pools, volumes, or even VMs. virt-manager had to open separate connections for things like volume creation so we could run them in a separate thread. Newer libvirt doesn't require this, but virt-manager hasn't switched over yet, and the change touches too many vital pieces to risk this late in the game.

And since the solution is just to restart the app, it's a pretty easy workaround.
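The leak pattern described above can be sketched in simplified Python. This is an illustration only: the Server and Connection classes below are stand-ins for libvirtd's client limit and libvirt.open(), not virt-manager's actual code.

```python
# Sketch of the connection leak: each background task opens a fresh
# connection and never closes it, so a server-side cap of 20
# concurrent clients is exhausted after 19 leaked task connections
# on top of the app's main connection.

class Server:
    """Stand-in for libvirtd with its max_clients limit."""
    MAX_CLIENTS = 20

    def __init__(self):
        self.active = 0

    def connect(self):
        if self.active >= self.MAX_CLIENTS:
            raise RuntimeError(
                "Too many active clients (%d), dropping connection"
                % self.MAX_CLIENTS)
        self.active += 1
        return Connection(self)

class Connection:
    """Stand-in for a libvirt client connection."""
    def __init__(self, server):
        self.server = server
        self.open = True

    def close(self):
        if self.open:
            self.server.active -= 1
            self.open = False

def create_volume_leaky(server):
    conn = server.connect()   # new connection per background task...
    # ...perform the storage operation on conn...
    # bug: conn.close() is never called, leaking a client slot

def create_volume_fixed(server):
    conn = server.connect()
    try:
        pass                  # perform the storage operation
    finally:
        conn.close()          # always release the client slot

server = Server()
server.connect()              # the app's main connection (slot 1)
for _ in range(19):           # 19 leaked task connections -> 20 total
    create_volume_leaky(server)

try:
    create_volume_leaky(server)   # the 20th add attempt is refused
except RuntimeError as e:
    print(e)                  # prints the "Too many active clients" error
```

The fixed variant closes the connection in a finally block, so repeated volume operations never accumulate client slots; the later virt-manager rework avoids opening per-task connections at all.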
Comment 6 Cole Robinson 2009-12-15 14:22:35 EST
Created attachment 378590 [details]
Backport of migrate dialog from upstream

This upstream backport (which is needed for several other bugs) actually pulls in the needed changes to fix this bug.
Comment 7 Cole Robinson 2009-12-15 14:42:15 EST
Fix built in virt-manager-0_6_1-9_el5
Comment 12 errata-xmlrpc 2010-03-30 04:50:53 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0281.html
