Description of problem:
Hello, I generally use KVM (qemu-kvm) and virt-manager to manage my virtual machines. I have installed this setup on numerous hosts, from no-brand PCs to HP DL380 servers, without problems. This time I have a brand new server, freshly installed with RHEL 6.1: a Dell PowerEdge R610 with
- dual 6-core Intel Xeon E5649 CPUs (with hyperthreading), i.e. 24 logical processors
- 64 GB of RAM
- 12 network cards (4 onboard + 2x4 add-on)
- 2x140 GB SAS disks in RAID

RHEL 6.1 is freshly installed on it (without updates, because this server has no internet connectivity), and the OS is running fine. When I run virt-manager connected to the local libvirtd, I get the following error messages:
- libvirt connection does not support virtual network management
- libvirt connection does not support storage management
- libvirt connection does not support interface management

Version-Release number of selected component (if applicable):
- RHEL 6.1 freshly installed
- libvirt-0.8.7-18.el6.x86_64
- virt-manager-0.8.6-4.el6.noarch
- qemu-kvm-0.12.1.2-2.160.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install RHEL 6.1.
2. Launch virt-manager and connect to localhost.
3. Go to the virtual network, storage, or interface tabs.

Actual results:
The message "libvirt connection does not support storage management".

Expected results:
The GUI to manage virtual networks, storage, or interfaces.

Additional info:
The command "virsh pool-list" does not list the pools either. I have already installed and used exactly these versions on other hardware without problems, so I guess this may come from something specific to the PowerEdge R610, maybe the number of CPUs or network cards? I remain at your disposal for any test you wish to run.

Best regards
Hi Daniel,

Could you set the libvirt log level to debug, repeat what you did, and then attach the log file?

# vim /etc/libvirt/libvirtd.conf
log_level = 1
log_output_file = "1:file:/tmp/libvirtd_debug.log"
Hello,

I have found the problem: the CPU virtualization instructions (VT) were not enabled in the BIOS! After enabling them, everything is working now, so you can close this bug; I am sorry for the inconvenience. Alternatively, you may turn this bug into the following request: virt-manager should display a more explicit message ("enable VT on the CPU").

Best regards
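For reference, a quick way to check from the running system whether KVM can actually use the virtualization extensions (a minimal sketch, not an official tool; note the CPU flag can still be present when VT is disabled in the BIOS, so the /dev/kvm check is the decisive one):

import os

# 'vmx' (Intel VT-x) / 'svm' (AMD-V) in /proc/cpuinfo means the CPU
# supports hardware virtualization; the flag may remain visible even
# when VT is disabled in the BIOS.
def cpu_has_vt():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("CPU advertises VT: %s" % cpu_has_vt())
# /dev/kvm only exists once the kvm module has initialized, which
# fails when VT is disabled in the BIOS ("kvm: disabled by bios"
# appears in dmesg in that case).
print("KVM usable (/dev/kvm present): %s" % os.path.exists("/dev/kvm"))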
I'm still confused about this. VT being disabled means you would not be able to benefit from KVM, but it should not be the reason you can't manage the "virtual network", "storage", and "host interfaces" tabs.
I understand, but it is the only change I made, and after a reboot it was working! I am reluctant to disable VT again and test with the debug log level, because I now have some VMs running and people working on them. I should receive another identical server in a couple of weeks; what I propose, if you are still interested, is to try again with VT disabled, and I will send you the log at that time.

Best regards
(In reply to comment #5)
> I should receive another identical server in a couple of weeks; what I
> propose, if you are still interested, is to try again with VT disabled,
> and I will send you the log at that time.
>
> Best regards

Yes, please do that. Thanks!
Hello,

I have received another server identical to the first one. VT was also disabled in the BIOS, and I can reproduce exactly the same behavior. So I left VT disabled and put the logging settings in libvirtd.conf as you requested:

log_level = 1
log_output_file = "1:file:/tmp/libvirtd_debug.log"

then restarted libvirtd and launched virt-manager to reproduce the "libvirt connection does not support storage management" message. Unfortunately, /tmp/libvirtd_debug.log was not even created. If I created it myself, world-writable, and tried again, it was left empty. I am willing to help, so if you have another test in mind, I will try it.

PS: I confirm that just enabling VT fixes the problem.

Best regards
Hi Daniel,

What should go in libvirtd.conf is:

log_level = 1
log_outputs="1:file:/var/log/libvirtd_debug"

Also, you mentioned that you had a pool you were trying to list. By default there are no pools created; virt-manager creates one when it starts up, so not seeing any pools may be normal if virt-manager has not been able to start successfully. You should be able to create one with:

virsh pool-define bar.xml

where bar.xml is:

<pool type='dir'>
  <name>bar</name>
  <source>
  </source>
  <target>
    <path>/foo/bar/baz</path>
  </target>
</pool>

Dave
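If scripting it is easier, the same pool can be defined through the libvirt Python bindings; a minimal sketch, assuming the XML above has been saved as bar.xml:

import libvirt

# Connect to the local system libvirtd, the same URI virt-manager uses.
conn = libvirt.open("qemu:///system")

# Define (but do not start) the directory-backed pool from bar.xml.
xml = open("bar.xml").read()
pool = conn.storagePoolDefineXML(xml, 0)

# Active pools and defined-but-inactive pools are listed separately.
print("active:   %s" % conn.listStoragePools())
print("inactive: %s" % conn.listDefinedStoragePools())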
Cole, does virt-manager check for the presence of VT or anything like that?
Not with respect to pools/networks/interfaces. We check for that support just by trying a simple listDefinedPools call and seeing if it errors. So, to the reporter, what is the output of:

virsh --connect qemu:///system net-list --all
virsh --connect qemu:///system pool-list --all
virsh --connect qemu:///system iface-list --all
Hi again,

The results of the commands are:

# virsh --connect qemu:///system net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes

# virsh --connect qemu:///system pool-list --all
Name                 State      Autostart
-----------------------------------------
default              inactive   no
VMINFRA01            active     yes
VMREF01              active     yes

# virsh --connect qemu:///system iface-list --all
Name                 State      MAC Address
--------------------------------------------
br0                  active     14:fe:b5:db:72:d9
em1                  active     14:fe:b5:db:72:d7
em3                  active     14:fe:b5:db:72:db
lo                   active     00:00:00:00:00:00
em4                  inactive   14:fe:b5:db:72:dd
p1p1                 inactive   00:1b:21:ae:5e:a0
p1p2                 inactive   00:1b:21:ae:5e:a1
p1p3                 inactive   00:1b:21:ae:5e:a4
p1p4                 inactive   00:1b:21:ae:5e:a5
p2p1                 inactive   00:1b:21:ae:61:f8
p2p2                 inactive   00:1b:21:ae:61:f9
p2p3                 inactive   00:1b:21:ae:61:fc
p2p4                 inactive   00:1b:21:ae:61:fd

Even if I reboot with VT disabled, I now get the same output! However, I do remember that the first time, "virsh pool-list --all" returned nothing, when it should at least have listed the default pool. So I can no longer reproduce the problem with the virsh pool-list command, I guess because it has been initialized once. However, I can still reproduce the problem with virt-manager; I have sent a screenshot as an attachment. The only difference between the two hosts is that VT is disabled.

And a last point: the log file defined in libvirtd.conf is still empty.

Best regards
Created attachment 515022
The server with VT disabled cannot manage storage
Sorry, I may have led you astray. Were those commands run on the problem host, svgn0003? If not, please try:

virsh --connect qemu+ssh://root@svgn0003/system pool-list --all

(replace svgn0003 with whatever hostname you used). The alternative is to SSH to the problem machine and run the commands I listed in comment#10.

If all of that doesn't error, please SSH to the problem machine and put the following code into a file named test.py:

import libvirt

# Open a connection to the local system libvirtd.
conn = libvirt.open("qemu:///system")

# List the active host interfaces libvirt knows about.
print(conn.listInterfaces())

then run:

python test.py

and paste the output. Thanks!
Hi again,

I ran the commands directly on the server that has the problem. But please remember that the pool-list command now always works (VT enabled or not), so I guess your test program will work too (I will test it as soon as possible anyway). The only way I can reproduce the problem now is graphically in virt-manager, as shown in the attached screenshot.

Best regards
Daniel, what I'm trying to determine is why virt-manager is reporting those issues, since as far as I can tell all it does behind the scenes is essentially 'virsh pool-list --all', checking whether libvirt returns an error, not just an empty list. So I'm very confused why those virsh commands don't seem to return an error, yet virt-manager is seeing one. Can you run the python code I recommended at the end of comment #13 on your problem server and show me the output? Thanks.
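To make that concrete, the support probe amounts to something like the following (a rough sketch of the idea, not the actual virt-manager code; the point is that only a libvirtError, never an empty list, triggers the "does not support" message):

import libvirt

conn = libvirt.open("qemu:///system")

def supported(probe):
    # An empty result is fine; only a libvirt error means "unsupported".
    try:
        probe()
        return True
    except libvirt.libvirtError:
        return False

print("storage:    %s" % supported(conn.listDefinedStoragePools))
print("networks:   %s" % supported(conn.listDefinedNetworks))
print("interfaces: %s" % supported(conn.listDefinedInterfaces))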
Cole, I'm going to transfer this BZ to you, but if it turns out to be libvirt, of course send it back. -Dave
Daniel, ping, can you provide the info requested in comment #15? Thanks
(In reply to comment #17)
> Daniel, ping, can you provide the info requested in comment #15? Thanks

Hello, sorry, I ran the test a long time ago but apparently forgot to post the result. python test.py gives:

['br0', 'lo']

This is consistent with virsh iface-list, which also works now, with or without VT on the CPU. So, to summarize what I have observed:

1) On a brand new server installed without VT enabled on the CPU, neither virt-manager nor "virsh pool-list" nor "virsh iface-list" displays any information.
2) After rebooting with VT enabled on the CPU, everything works.
3) After rebooting again with VT disabled, "virsh pool-list" and "virsh iface-list" (and your test.py) now display correct information; virt-manager is the only one still not working, as shown in the attached screenshots.

So for me it is not blocking, as you have to enable VT in order to run guests anyway. I was just fooled by this server, which disables VT by default.
Strange: if I disable VT on my box, I can't even initiate a connection in virt-manager. Since you found the root cause of your issue, I'm just going to close this as WORKSFORME. Thanks for the help!