Created attachment 495596 [details]
snapshot of libvirt locked, must ctrl+alt+esc and kill (all windows closed)

Description of problem:
With this issue, storage pools cannot be used. It is similar to "Bug 630440 - virt-manager gui locks/freezes when using a symlink in a storage pool path", but occurs with F15 beta and is worse, because even the default storage pool does not work.

Version-Release number of selected component (if applicable):
qemu-kvm.x86_64        2:0.14.0-7.fc15 @fedora
kernel 2.6.38.3-18.fc15.x86_64
libvirt.x86_64         0.8.8-4.fc15    @fedora
libvirt-client.x86_64  0.8.8-4.fc15    @fedora
libvirt-python.x86_64  0.8.8-4.fc15    @fedora

How reproducible:

Steps to Reproduce:
1. Create a new vm.
2. Choose managed storage.
3. Select "New...".

Actual results:
See the attached picture; the GUI is frozen.

Expected results:

Additional info:
SELinux is permissive.
Tried again and the problem is still there, but it only happens when creating a NEW vm. The default storage pool works ok when adding a disk to an existing vm.

kernel 2.6.38.5-24.fc15.x86_64
libvirt.x86_64         0.8.8-4.fc15    @fedora
libvirt-client.x86_64  0.8.8-4.fc15    @fedora
libvirt-python.x86_64  0.8.8-4.fc15    @fedora
python-virtinst.noarch 0.500.6-2.fc15  @fedora
virt-manager.noarch    0.8.7-4.fc15    @updates-testing
virt-viewer.x86_64     0.3.1-1.fc15    @fedora
gpxe-roms-qemu.noarch  1.0.1-4.fc15    @fedora
qemu-common.x86_64     2:0.14.0-7.fc15 @fedora
qemu-img.x86_64        2:0.14.0-7.fc15 @fedora
qemu-kvm.x86_64        2:0.14.0-7.fc15 @fedora
qemu-system-x86.x86_64 2:0.14.0-7.fc15 @fedora
Sometimes both the default and my created storage pool work ok, and the issue seems to be gone. Other times it is impossible to create a new vm at all unless I avoid "Select managed or other existing storage", because the issue prevents creating a new virtual disk in both the default and custom storage pools.
New discovery: instead of killing the frozen window, simply press ESC. It returns to the previous window. It looks like an 'invisible' window opened in limbo: that 'window' can neither be seen nor interacted with; only ESC works. Killing is annoying because it takes down the whole GUI. I cannot find what triggers it; as stated in my previous post, there are days it does not happen. .xsession-errors showed nothing at all for either root or the normal user.

kernel 2.6.38.6-27.fc15.x86_64
Storage pools:
default      = /var/lib/libvirt/images     (root:root) * untouched, default since release reinstall
vmpool_data1 = /data1/vm/user/virt-manager (root:root) * was user:user, tried qemu:qemu and finally set it like the default

No change: when it happens, it doesn't want to go away.
Also, these paths work:
* Right-click "localhost:qemu" > storage > new volume > WORKS. (Workaround: then manually select the volume when creating a new guest.)
* Select a guest, add hardware > storage > new volume > WORKS.

Then return to the new-vm wizard and try to create a new volume in any storage pool from there, and the issue manifests.
What's the output of:

ls -l /var/lib/libvirt/images
virt-manager --debug (when reproducing the issue)
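For anyone gathering the requested diagnostics, a minimal sketch of capturing them to files that can then be attached to the bug (paths under /tmp are my own choice; `virt-manager --debug` itself must be run interactively in an X session while reproducing the freeze, so it is only shown commented out here):

```shell
# Save a directory listing (with SELinux context) of the default pool path.
# The path may not exist on every machine, so don't abort on failure.
ls -lZ /var/lib/libvirt/images > /tmp/images-perms.txt 2>&1 || true

# Run virt-manager with debug logging, mirroring output to a log file.
# (Commented out: needs an X session and the freeze reproduced by hand.)
# virt-manager --debug 2>&1 | tee /tmp/virt-manager-debug.log

echo "diagnostics saved under /tmp"
```

Using `tee` keeps the debug output visible on the terminal while also writing the log file to attach.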
Ok, I have been in the middle of a full reinstall and filesystem relayout since 2011-07-10 :-) because I bought an SSD and switched from 2 disks with a mirrored /home to 4 disks + the SSD, and I recreated all filesystems. I am still copying data to the correct locations. After I finish relocating data (a couple of days, I think) I will try to reproduce it and will post the requested output. Thanks in advance.
Ok, I mostly finished. I still need to set permissions on a lot of things, but after several days of copying I am glad I finished (copying data). Keep in mind that I recreated all filesystems, so some permissions may be wrong (if needed, I will retest later once those are sorted out). I launched virt-manager with debug as a normal user and created a test vm in the default storage pool (which is now a raid0 with 512k stripe size). I intend to have two storage pools (fast -> raid0 and safe -> raid1).

--------
$ virt-manager --debug
2011-07-15 11:55:22,807 (virt-manager:175): Application startup
2011-07-15 11:55:22,807 (virt-manager:363): Launched as: /usr/share/virt-manager/virt-manager.py --debug
2011-07-15 11:55:22,808 (virt-manager:364): GTK version: (2, 24, 4)
2011-07-15 11:55:22,981 (engine:338): About to connect to uris ['qemu:///system']
2011-07-15 11:55:23,056 (engine:464): window counter incremented to 1
2011-07-15 11:55:23,071 (connection:905): Scheduling background open thread for qemu:///system
2011-07-15 11:55:23,072 (connection:1065): Background thread is running
2011-07-15 11:55:27,814 (connection:1093): Background open thread complete, scheduling notify
2011-07-15 11:55:27,815 (connection:1098): Notifying open result
2011-07-15 11:55:27,899 (connection:1105): qemu:///system capabilities:
<capabilities>
  <host>
    <uuid>0046001e-8c00-01c5-e947-00248c382b0c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Opteron_G3</model>
      <vendor>AMD</vendor>
      <topology sockets='1' cores='3' threads='1'/>
      <feature name='wdt'/>
      <feature name='skinit'/>
      <feature name='osvw'/>
      <feature name='3dnowprefetch'/>
      <feature name='cr8legacy'/>
      <feature name='extapic'/>
      <feature name='cmp_legacy'/>
      <feature name='3dnow'/>
      <feature name='3dnowext'/>
      <feature name='pdpe1gb'/>
      <feature name='fxsr_opt'/>
      <feature name='mmxext'/>
      <feature name='ht'/>
      <feature name='vme'/>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='3'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
    </secmodel>
  </host>
  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu</emulator>
      <machine>pc-0.14</machine>
      <machine canonical='pc-0.14'>pc</machine>
      <machine>fedora-13</machine>
      <machine>pc-0.13</machine>
      <machine>pc-0.12</machine>
      <machine>pc-0.11</machine>
      <machine>pc-0.10</machine>
      <machine>isapc</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/bin/qemu-kvm</emulator>
        <machine>pc-0.14</machine>
        <machine canonical='pc-0.14'>pc</machine>
        <machine>fedora-13</machine>
        <machine>pc-0.13</machine>
        <machine>pc-0.12</machine>
        <machine>pc-0.11</machine>
        <machine>pc-0.10</machine>
        <machine>isapc</machine>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine>pc-0.14</machine>
      <machine canonical='pc-0.14'>pc</machine>
      <machine>fedora-13</machine>
      <machine>pc-0.13</machine>
      <machine>pc-0.12</machine>
      <machine>pc-0.11</machine>
      <machine>pc-0.10</machine>
      <machine>isapc</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/bin/qemu-kvm</emulator>
        <machine>pc-0.14</machine>
        <machine canonical='pc-0.14'>pc</machine>
        <machine>fedora-13</machine>
        <machine>pc-0.13</machine>
        <machine>pc-0.12</machine>
        <machine>pc-0.11</machine>
        <machine>pc-0.10</machine>
        <machine>isapc</machine>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>
</capabilities>
2011-07-15 11:55:28,143 (connection:205): Using libvirt API for netdev enumeration
2011-07-15 11:55:28,146 (connection:244): Using libvirt API for mediadev enumeration
2011-07-15 11:55:31,336 (create:743): Guest type set to os_type=hvm, arch=x86_64, dom_type=kvm
2011-07-15 11:55:39,442 (config:661): get_default_directory(media): returning /data1/iso/centos/centos-5.6/CentOS-5.6-x86_64-bin-DVD
2011-07-15 11:55:41,684 (config:669): set_default_directory(media): saving /data1/iso/centos/centos-5.6/CentOS-5.6-x86_64-bin-DVD
2011-07-15 11:55:47,802 (DistroInstaller:125): DistroInstaller location is a local file/path: /data1/iso/centos/centos-5.6/CentOS-5.6-x86_64-bin-DVD/CentOS-5.6-x86_64-bin-DVD-1of2.iso
--------

Ok, it froze like usual.

# ls -la /var/lib/libvirt/images/
total 24
drwxr-xr-x. 3 root root  4096 Jul 12 10:30 .
drwxr-xr-x. 9 root root  4096 Jul 15 10:58 ..
drwx------. 2 root root 16384 Jul 12 10:30 lost+found

# ls -laZ /var/lib/libvirt/images/
drwxr-xr-x. root root system_u:object_r:file_t:s0         .
drwxr-xr-x. root root system_u:object_r:virt_var_lib_t:s0 ..
drwx------. root root system_u:object_r:file_t:s0         lost+found

The /var/lib/libvirt/images/ filesystem is empty/new.
---------
Note: SELinux is PERMISSIVE (at least for now, because a full relabel is pending: I moved a lot of data in order to change the disk layout/raid scheme...). Permissions may be wrong.
Hmm, I can't tell from your previous comment: are you still seeing the problem after reinstalling your machine?
Yes, after reinstalling and recreating all filesystems except /home, the problems with the default storage pool persist. I have created another pool, /data1/libvirt-data:

drwxr-xr-x. root root unconfined_u:object_r:file_t:s0 /data1/libvirt-data/

which seems to work ok.
Today I tried virt-manager on a laptop (an Intel laptop) and found the problem there as well... The issue is not as hardware-specific as I thought: very different hardware, same problem.
Found something that barely resembles a workaround...

* Launch virt-manager. Create a new vm.
* Name: a, forward.
* Select "use iso image", then browse.
* Select the "vm-storage-safe" pool, press "New volume". The window appears for a fraction of a second and goes to some "background/invisible" realm.
* Hit ESC (virt-manager is not locked; the window is just being displayed in another dimension).
* Repeat: press "New volume".
* Press ctrl+alt+tab; it will show that the "New storage volume" window is now there. (Before, it was not even listed as an existing window in KDE.)
* Hit ESC, press "New volume" again: the "New storage volume" window now appears normally.

In fact, past this point it stays fixed until virt-manager is closed and relaunched. So the workaround is: DO NOT CLOSE virt-manager. On each boot, do the above once and leave virt-manager running.

Additional info:
libvirt.x86_64         0.8.8-7.fc15   @updates
libvirt-client.x86_64  0.8.8-7.fc15   @updates
libvirt-python.x86_64  0.8.8-7.fc15   @updates
python-virtinst.noarch 0.500.6-2.fc15 @fedora
virt-manager.noarch    0.8.7-6.fc15   @updates
virt-viewer.x86_64     0.3.1-1.fc15   @fedora
kernel 2.6.40.6-0.fc15.x86_64

The system is up to date.
I have the same issue with virt-manager locking up while reading storage pools. Every time virt-manager locks up, I see udevadm settle running in the background. Is there any way to figure out why udevadm is so slow? This happens on both Fedora 14 and 15 with all updates applied.
I'd like to confirm this issue. Hitting "Escape" does allow me to try again, with success. I also have a CentOS host, which has virt-manager 0.8.6, and can offer some observations:

* If I run virt-manager locally on the CentOS host (GNOME), I do not see this issue.
* If I run 'ssh -X centos-hostname virt-manager' on the workstation (KDE), I do see this issue. I've also verified that -XA and -XAY do not change the behaviour.
* If I run 'virt-manager' on the workstation (KDE) and create a local qemu guest, I see this issue.
* If I run 'virt-manager' on the workstation (KDE), add an rsa-key-authenticated connection to the CentOS host, and create a guest on the remote host, I do see this issue.

I'm going to try another DE; it's starting to look like KDE is the common denominator here.
I tried this in GNOME and XFCE, with the expected results. To further narrow down the issue, I started a bare X session and performed a few tests:

xinit -- :1 vt8
metacity &
su -c 'virt-manager'
# test for bug: am able to add volumes as expected
mutter --replace
# test for bug: am able to add volumes as expected
xfwm4 --replace
# test for bug: am able to add volumes as expected
kwin --replace
# test for bug: first occurrence of the 'new volume' dialog box is 'hidden'; I can hit
# escape while the 'hidden' box is active/selected, and subsequent attempts perform as
# expected

I can try and poke further, but I'm not at all familiar with KDE debugging tools, so it might be a task best left for the initiated.
Ahh, all this KDE and invisible-window talk makes me think this is related to something else I just fixed. Can someone try this fix for me:

sudo gedit /usr/share/virt-manager/vmm-create-vol.glade

Delete the fifth or sixth line, the one that reads:

<property name="visible">True</property>

and retry the freezing scenario. What was happening is that the dialog was disappearing in KDE due to a small bug, but the storage browse dialog was set to not allow input while the 'create volume' wizard was showing, and it never got the message that the window had disappeared.
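For anyone who prefers to apply the suggested edit non-interactively, here is a sketch using GNU sed. The XML below is a hypothetical stand-in that is only shaped like the relevant part of the glade file; the real file is /usr/share/virt-manager/vmm-create-vol.glade, and you should work on a backup first:

```shell
# Stand-in file (hypothetical content, shaped like the real glade XML).
GLADE=${GLADE:-/tmp/vmm-create-vol.glade}
cat > "$GLADE" <<'EOF'
<glade-interface>
  <widget class="GtkWindow" id="vmm-create-vol">
    <property name="visible">True</property>
    <child>
      <widget class="GtkButton" id="ok">
        <property name="visible">True</property>
      </widget>
    </child>
  </widget>
</glade-interface>
EOF

cp "$GLADE" "$GLADE.bak"   # always keep a backup before editing

# Delete only the FIRST visibility property (the top-level dialog's);
# GNU sed's 0,/regexp/ range addresses up to and including the first match.
sed -i '0,/<property name="visible">True<\/property>/{/<property name="visible">True<\/property>/d}' "$GLADE"

grep -c 'name="visible"' "$GLADE"   # → 1 (only the inner widget's remains)
```

On the real file, point GLADE at a root-writable copy and run the same cp/sed pair with sudo.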
I launched virt-manager and reproduced the issue. I closed virt-manager, edited "/usr/share/virt-manager/vmm-create-vol.glade", and deleted the line "<property name="visible">True</property>". Then I launched virt-manager again and could not reproduce the issue. It seems that removing that line FIXED the issue. Thanks for the info.

Package versions:
libvirt.x86_64         0.8.8-7.fc15   @updates
libvirt-client.x86_64  0.8.8-7.fc15   @updates
libvirt-python.x86_64  0.8.8-7.fc15   @updates
python-virtinst.noarch 0.500.6-2.fc15 @fedora
virt-manager.noarch    0.8.7-6.fc15   @updates
virt-viewer.x86_64     0.3.1-1.fc15   @fedora
kernel: 2.6.41.10-3.fc15.x86_64
Cool, glad to hear. I'm duping this bug to the one already in POST. Thanks for all the info in this wild goose chase. *** This bug has been marked as a duplicate of bug 749928 ***