Description of problem:
A spinning busy cursor never finishes after an aborted attempt to add a drive.
Version-Release number of selected component (if applicable):
How reproducible: seems 100%
Steps to Reproduce:
1. Create a new virtual machine
2. After creating a disk, cancel out of the progress window while space is being allocated.
3. Now attempt to add a new disk; the new attempt never finishes.
Expected results:
The disk should stop being created and be removed, or new dialogs should not hang.
Actual results:
Dialogs hang, and new disks do not appear in the machine description. The old disk's creation was not aborted; I had to remove the disk manually. That was the whole point: I did not want the disk in /var/lib/libvirt/images and was trying to change its location.
Note: exiting and restarting virt-manager made the problem disappear. This Python traceback was printed on exit:
Unable to complete install: 'Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/addhardware.py", line 655, in add_device
File "/usr/share/virt-manager/virtManager/domain.py", line 1175, in add_device
File "/usr/share/virt-manager/virtManager/connection.py", line 711, in define_domain
AttributeError: 'NoneType' object has no attribute 'defineXML'
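For illustration only, a hedged sketch of the kind of guard that would turn this crash into a clean error. The method name define_domain mirrors the traceback path, but the attribute name self.vmm and the body are a guess, not virt-manager's actual code:

    def define_domain(self, xml):
        # Per the traceback, the underlying libvirt virConnect handle
        # (assumed here to be self.vmm) can end up None after the
        # aborted operation, so defineXML() is called on None.
        if self.vmm is None:
            raise RuntimeError("libvirt connection is unavailable; "
                               "cannot define the domain")
        return self.vmm.defineXML(xml)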
The issue here is that disk allocation isn't cancelable. libvirt now has some support for running async jobs; need to investigate whether it will even help solve this problem.
The current APIs only operate against virDomainPtr objects, but the design can trivially be replicated against virStoragePool/VolPtr objects.
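To make the gap concrete, a minimal sketch using only calls that exist in the libvirt Python bindings; the connection URI, domain name, and pool name are placeholders. Domain jobs can be aborted from another thread via abortJob() (virDomainAbortJob), but volume allocation through virStorageVolCreateXML (pool.createXML() in Python) simply blocks, with no abort counterpart:

    import threading
    import libvirt

    conn = libvirt.open("qemu:///system")

    # Domain jobs are cancelable: abortJob() stops a long-running
    # operation such as a managed save or migration.
    dom = conn.lookupByName("example-vm")        # placeholder domain
    threading.Timer(5.0, dom.abortJob).start()   # cancel after 5 seconds
    dom.managedSave(0)                           # long-running; honors abort

    # Volume allocation is not cancelable: createXML() blocks until the
    # full preallocation finishes, and there is no virStorageVolAbortJob
    # to call from another thread.
    pool = conn.storagePoolLookupByName("default")
    vol_xml = """
    <volume>
      <name>disk.img</name>
      <capacity>10737418240</capacity>        <!-- 10 GiB -->
      <allocation>10737418240</allocation>    <!-- fully preallocated -->
    </volume>
    """
    vol = pool.createXML(vol_xml, 0)             # blocks; no way to cancel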
Since it's unlikely that the required libvirt support will appear in any RHEL5 release at this point, reassigning to RHEL6.
Since the RHEL 6.1 External Beta has begun and this bug remains unresolved, it has been rejected, as it is not proposed as an exception or blocker.

Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
Since this is a libvirt RFE not filed by a customer, reassigning to upstream libvirt.
*** Bug 830676 has been marked as a duplicate of this bug. ***
*** This bug has been marked as a duplicate of bug 524205 ***