Red Hat Bugzilla – Bug 508357
Errors occur after adding volumes to a storage pool more than 19 times
Last modified: 2010-03-30 04:50:53 EDT
Description of problem:
An error occurs after performing the volume-add action more than 19 times.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Open virt-manager and select a connection, then choose "Edit -> Host Details" to open the "Host Details" window.
2. Select the "Storage" tab and add a new storage pool by clicking the "+" button at the bottom left.
3. Select the newly added storage pool, then add a new volume by clicking the "New Volume" button at the bottom middle.
4. Add 19 volumes by repeating step 3.
5. Try to add the 20th volume.
An error occurs when trying to add the 20th volume, and the error popup reappears on every subsequent attempt until virt-manager is closed and reopened.
Expected results: the volumes are added normally.
Error creating vol: this function is not supported by the hypervisor: virStoragePoolLookupByName
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/createvol.py", line 178, in _async_vol_create
newpool = newconn.storagePoolLookupByName(self.parent_pool.get_name())
File "/usr/lib64/python2.4/site-packages/libvirt.py", line 1260, in storagePoolLookupByName
if ret is None:raise libvirtError('virStoragePoolLookupByName() failed', conn=self)
libvirtError: this function is not supported by the hypervisor: virStoragePoolLookupByName
1) We can't add a new storage pool with virt-manager
2) We can't add a new volume to any storage pool with virt-manager
3) We can't add a new storage pool with virsh
4) We can't add a new volume to any storage pool with virsh
5) Even if all 19 volume-add attempts fail (for example, because an unsupported volume format was chosen), the issue still occurs.
6) After closing virt-manager, it takes another 19 add attempts before the issue shows up again.
7) If virsh is opened while virt-manager is in this state, virsh shows the same issue too. On the other hand, even while that virsh session is open, closing and restarting the affected virt-manager means it again takes 19 add attempts before the issue reappears in virt-manager.
8) Adding volumes 20 times with virsh works fine.
9) Deleting more than 19 volumes with virt-manager also works fine; the issue does not appear.
Sorry, part of the description was unclear; it should read:
1) **After the error occurs,** we can't add a new storage pool with virt-manager
2) **After the error occurs,** we can't add a new volume to any storage pool with virt-manager
3) **After the error occurs,** we can't add a new storage pool with virsh
4) **After the error occurs,** we can't add a new volume to any storage pool with virsh
If this messes up virsh, there is at least a bug at the libvirt level, so reassigning there.
I reproduced the problem, and I think it's a resource leak in virt-manager.
I'm seeing the following logged into /var/log/messages:
Jun 26 22:55:22 virtlab103 libvirtd: 22:55:22.516: error :
qemudDispatchServer:1218 : Too many active clients (20), dropping connection
I'm reassigning to Cole, with his permission.
I don't think this is a 5.4 candidate.
You should be able to reproduce this by creating 20 storage pools, volumes, or even VMs. virt-manager had to open a separate connection for operations like volume creation so they could run in a separate thread. Newer libvirt doesn't require this, but virt-manager hasn't switched over yet, and the change touches too many vital pieces to risk this late in the game.
And since the solution is just to restart the app, it's a pretty easy workaround.
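The leak described above can be sketched with a toy model. This is illustrative only, not virt-manager or libvirt code; the names (FakeServer, leaky_create_vol, fixed_create_vol) are invented, and the 20-client cap mirrors the "Too many active clients (20)" limit seen in the libvirtd log:

```python
# Toy model of the connection leak: virt-manager holds one main connection,
# and each volume-create opens a worker connection that is never closed.
# Once libvirtd's active-client cap is reached, new connections are dropped.

MAX_CLIENTS = 20  # matches the cap in the libvirtd log message

class FakeServer:
    """Stand-in for libvirtd: refuses connections past the cap."""
    def __init__(self, max_clients):
        self.max_clients = max_clients
        self.active = 0

    def accept(self):
        if self.active >= self.max_clients:
            raise RuntimeError(
                "Too many active clients (%d), dropping connection"
                % self.max_clients)
        self.active += 1

    def close_one(self):
        self.active -= 1

def leaky_create_vol(server):
    # Pre-fix pattern: open a worker connection for the async job,
    # never close it.
    server.accept()

def fixed_create_vol(server):
    # Post-fix pattern: close (or reuse) the connection when the job ends.
    server.accept()
    server.close_one()

server = FakeServer(MAX_CLIENTS)
server.accept()                  # virt-manager's main connection
for _ in range(19):              # the first 19 volume-adds succeed...
    leaky_create_vol(server)

try:
    leaky_create_vol(server)     # ...the 20th hits the cap
    failed = False
except RuntimeError:
    failed = True                # every later attempt fails the same way
```

With the fixed pattern, the active-client count returns to one after each operation, so the cap is never reached no matter how many volumes are created.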
Created attachment 378590 [details]
Backport of migrate dialog from upstream
This upstream backport (which is needed for several other bugs) actually pulls in the needed changes to fix this bug.
Fix built in virt-manager-0_6_1-9_el5
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.