Red Hat Bugzilla – Bug 426324
libvirt mis-detects hypervisor version for NUMA APIs
Last modified: 2009-12-14 16:16:20 EST
Description of problem:
The RHEL5 Xen 3.1.0 hypervisor contains back-ports of the NUMA capabilities from
Xen 3.2.0. The SYSCTL hypercall version in Xen 3.1.0 is '3', while the version
in Xen 3.2.0 is '4' (or later).
When deciding whether to try the NUMA 'freecell' operation, libvirt sees the
hypercall version '3' and immediately gives up without even trying. To work with
the back-port, it needs to try the hypercall unconditionally and deal gracefully
with it failing.
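As a minimal sketch of that approach (xen_sysctl_availheap() and
get_cell_free_memory() are hypothetical stand-ins for the real libvirt/Xen
interfaces, not the code in the attached patch):

#include <stdio.h>

/* Hypothetical stand-in for the XEN_SYSCTL free-memory hypercall;
 * returns 0 on success, -1 if the hypervisor rejects the operation. */
static int xen_sysctl_availheap(int cell, unsigned long long *free_kb)
{
    (void)cell;                 /* single-cell stub for illustration */
    *free_kb = 129261568ULL;    /* value from the expected output below */
    return 0;
}

static int get_cell_free_memory(int sys_interface_version,
                                int cell, unsigned long long *free_kb)
{
    /* Retained only to show what the old check looked at. */
    (void)sys_interface_version;

    /*
     * Old behaviour: refuse up front, which breaks back-ported
     * hypervisors that still report interface version 3:
     *
     *     if (sys_interface_version < 4)
     *         return -1;
     *
     * New behaviour: always attempt the hypercall and report
     * "unsupported" only if the hypervisor actually rejects it.
     */
    return xen_sysctl_availheap(cell, free_kb);
}

int main(void)
{
    unsigned long long free_kb;

    if (get_cell_free_memory(3, 0, &free_kb) == 0)
        printf("0: %llu kB\n", free_kb);
    else
        fprintf(stderr, "NUMA free-memory query not supported\n");
    return 0;
}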
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Created attachment 290088
Patch to unconditionally try to get NUMA info
Currently, when run on a new hypervisor it shows:
# virsh freecell
libvir: Remote error : Connection refused
libvir: warning : Failed to find the network: Is the daemon running ?
libvir: Xen error : failed Xen syscall xenHypervisorNodeGetCellsFreeMemory:
unsupported in sys interface < 4 0
The expected output is:
# virsh freecell
0: 129261568 kB
The patch just attached makes this work.
Cf. bug 235850 and bug 426321 for the kernel-xen and xen changes required to
test this issue.
This is also required to support NUMA topology via the 'virsh capabilities'
command, for the same reasons.
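For illustration, the NUMA topology appears in the 'virsh capabilities' XML
along these lines (this one-cell, two-CPU excerpt is made up for the example
and is not output from the affected host):

<capabilities>
  <host>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='2'>
            <cpu id='0'/>
            <cpu id='1'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>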
Dev acking the clone.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
libvirt-0.3.3-3.el5 has been rebuilt in dist-5E-qu-candidate with the fix.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.