Bug 615754
Summary: | After restarting libvirtd, "virsh vcpuinfo" doesn't work for the guests which were running before restarting the daemon | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 5 | Reporter: | Mark Wu <dwu> |
Component: | libvirt | Assignee: | Daniel Veillard <veillard> |
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 5.5 | CC: | eblake, jentrena, jialiu, mjenner, mzhan, sputhenp, tao, virt-maint, xen-maint |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | libvirt-0.8.2-1.el5 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2011-01-13 23:14:13 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Mark Wu
2010-07-18 13:26:05 UTC
Only when `nvcpupids > 0` can vcpuinfo be reported by qemud:

```c
qemudDomainGetVcpus(virDomainPtr dom,
                    virVcpuInfoPtr info,
                    int maxinfo,
                    unsigned char *cpumaps,
                    int maplen) {
    ...
    /* Clamp to actual number of vcpus */
    if (maxinfo > vm->nvcpupids)
        maxinfo = vm->nvcpupids;

    if (maxinfo >= 1) {
        if (info != NULL) {
            memset(info, 0, sizeof(*info) * maxinfo);
            for (i = 0 ; i < maxinfo ; i++) {
                info[i].number = i;
                info[i].state = VIR_VCPU_RUNNING;
                if (vm->vcpupids != NULL &&
                    qemudGetProcessInfo(&(info[i].cpuTime),
    ...
```

But the vcpu pids are not persistent across a daemon restart, since they are not logged in the status file (/var/run/libvirt/qemu/node1.xml). So qemudReconnectVMs(), which is called during libvirtd startup to recover the state of already running guests, cannot recover the values of "nvcpupids" and "pid".

In libvirt-0.7.7-5.fc13.rpm, this problem has been fixed: the vcpu pids are logged in the status file:

```c
static int qemuDomainObjPrivateXMLFormat(virBufferPtr buf, void *data) {
    ...
    if (priv->nvcpupids) {
        int i;
        virBufferAddLit(buf, "  <vcpus>\n");
        for (i = 0 ; i < priv->nvcpupids ; i++) {
            virBufferVSprintf(buf, "    <vcpu pid='%d'/>\n", priv->vcpupids[i]);
        }
        virBufferAddLit(buf, "  </vcpus>\n");
    }
    return 0;
}
```

and parsed again in qemuDomainObjPrivateXMLParse when reconnecting to the running domain.

Fixed in libvirt-0.8.2-1.el5.

Verified this bug on RHEL5u6 Server x86_64 KVM, and it PASSED.

1. Start a guest and check its vcpu info:

```
# virsh list --all
 Id Name                 State
----------------------------------
  1 rhel5u5              running

# virsh vcpuinfo rhel5u5
VCPU:           0
CPU:            3
State:          running
CPU time:       17.6s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       11.0s
CPU Affinity:   yyyy
```

2. Restart the libvirtd service:

```
# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]
```

3. Re-check the vcpu info:

```
# virsh vcpuinfo rhel5u5
VCPU:           0
CPU:            1
State:          running
CPU time:       17.7s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       11.0s
CPU Affinity:   yyyy
```

Verified with PASSED in the environments below, following the steps in comment 5:

- RHEL5.6-Server-x86_64-KVM
- RHEL5.6-Server-x86_64-Xen
- RHEL5.6-Client-i386-Xen
- RHEL5.6-Server-ia64-Xen

kernel-xen-2.6.18-228.el5
xen-3.0.3-117.el5
kvm-qemu-img-83-205.el5
kernel-2.6.18-228.el5
libvirt-0.8.2-8.el5

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHEA-2011-0060.html