+++ This bug was initially created as a clone of Bug #742811 +++

Description of problem:
rhn-profile-sync fails if libvirtd is not running on a virtualisation host.
Fat RHEL hypervisors in RHEV fit this category.

Version-Release number of selected component (if applicable):
Spacewalk 1.5

How reproducible:
Always

Steps to Reproduce:
1. Build RHEV environment
2. Subscribe fat RHEL hypervisor to Spacewalk
3. Run rhn-profile-sync

Actual results:
---
[root@hypervisor ~]# rhn-profile-sync
Updating package profile...
Updating hardware profile...
Updating virtualization profile...
libvir: RPC error : authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
Warning: Could not retrieve virtualization information!
        libvirtd service needs to be running.
You have new mail in /var/spool/mail/root
[root@hypervisor ~]#
---

Expected results:
---
[root@xen07 07:26 ~]# rhn-profile-sync
Updating package profile...
Updating hardware profile...
Updating virtualization profile...
[root@xen07 07:26 ~]#
---

Additional info:

--- Additional comment from colin.coe on 2011-10-03 00:54:46 EDT ---

I've done a smidge more poking at this.

--- /usr/share/rhn/virtualization/support.py.orig	2011-10-03 12:17:06.182383109 +0800
+++ /usr/share/rhn/virtualization/support.py	2011-10-03 12:51:01.451383232 +0800
@@ -48,13 +48,7 @@
     return True
 
-vdsm_enabled = None
-if not _check_status("libvirtd"):
-    # Only check for vdsm if libvirt is disabled.
-    # sometimes due to manual intervention both could be running
-    # in such case use libvirt as the system is now in
-    # un supported state.
-    vdsm_enabled = _check_status("vdsmd")
+vdsm_enabled = _check_status("vdsmd")
 
 ###############################################################################
@@ -98,6 +92,8 @@
     domain_list = domains.values()
     domain_uuids = domains.keys()
 
+    print domain_list
+
     if not vdsm_enabled:
         # We need this only for libvirt
         domain_dir = DomainDirectory()

This patch slightly changes the way vdsm is looked for. When you look at a RHEV (fat RHEL) hypervisor (at least in RHEV 3.0 BETA), you see both vdsmd and libvirtd running. Given this, I think that the check should be on vdsmd, not on libvirtd.

The second mod merely prints those virtual nodes on the host that are found by rhn-profile-sync.

Interestingly, on my only RHEV fat RHEL hypervisor, I see:
---
[root@benvir4p virtualization]# rhn-profile-sync
Updating package profile...
Updating hardware profile...
Updating virtualization profile...
[{'name': 'benupd1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '4', 'memory_size': 4194304, 'uuid': '8e4ffa4a7aa84da1a2d3c7cd61857ce0'},
 {'name': 'benwah1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '1', 'memory_size': 2097152, 'uuid': 'd7bc4696dc9246548f6d09d489f6eb3d'},
 {'name': 'benwas1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '2', 'memory_size': 4194304, 'uuid': '63a412dc124941dcb9c242ceb346812f'},
 {'name': 'benpxy1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '1', 'memory_size': 2097152, 'uuid': 'be101eb34150417d9b1a11c2fe300da9'},
 {'name': 'benmon1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '1', 'memory_size': 2097152, 'uuid': '8b0e7eeb23974d58970d43b01b7b8933'},
 {'name': 'benlah1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '1', 'memory_size': 2097152, 'uuid': 'ea00b884d8f242c88b5f374c095af568'},
 {'name': 'bencit1p', 'virt_type': 'fully_virtualized', 'state': 'running', 'vcpus': '1', 'memory_size': 2097152, 'uuid': 'a4f92c143cf04754b2cfb969313c300a'}]
[root@benvir4p virtualization]#
---

So the guests are being seen, but are not showing up in the webUI.

CC

--- Additional comment from mzazrivec on 2011-10-04 05:21:16 EDT ---

Thank you for the patch, applied in:

spacewalk.git master: 8d2df4f1d3a722c95c54d1cf7f2b4b28680d6dbf
satellite.git SATELLITE-5.4: c45f747832a9b45318d8f386792086152f71bdf8
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Cause: New design of VDSM in RHEV-3.
Consequence: The rhn-virtualization package installed in a RHEV-3 environment, when deciding the correct method for polling the host for available guests, would not correctly detect a RHEV host. As a result, the host would not report the running guests to the parent (RHN / RHN Satellite).
Fix: Correct the detection.
Result: rhn-virtualization on a RHEV-3 host correctly reports running guests to the RHN / RHN Satellite parent.
Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1,7 +1 @@
-Cause: New design of VDSM in RHEV-3.
+Due to a new design of the Virtual Desktop Server Manager (VDSM) in version 3 of the Red Hat Enterprise Virtualization platform, the rhn-virtualization packages installed in this environment failed to detect the Red Hat Enterprise Virtualization host when determining which method to use to poll the host for available guests. This prevented the host from reporting running guests to the parent (Red Hat Network or RHN Satellite). With this update, the detection algorithm has been adapted to ensure that rhn-virtualization reports the running guests as expected in this scenario.
-
-Consequence: rhn-virtualization package installed in a RHEV-3 environment, when deciding the correct method for polling the host for available guests would not correctly detect a RHEV host. As a result, the host would not report the running guests to the parent (RHN / RHN Satellite).
-
-Fix: Correct detection.
-
-Result: rhn-virtualization on a RHEV-3 host correctly reports running guests to the RHN / RHN Satellite parent.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1417.html