Description of problem:
The command "virsh vcpuinfo domN" just prints a blank line if domN was started before libvirtd was restarted.

Version-Release number of selected component (if applicable):
libvirt-0.6.3-33.el5.rpm

How reproducible:
100%

Steps to Reproduce:
1. start virtual machine "node1"
2. service libvirtd restart
3. virsh vcpuinfo node1

Actual results:
[root@dhcp-129-138 ~]# virsh vcpuinfo node1
[root@dhcp-129-138 ~]#

Expected results:
"virsh vcpuinfo" still works after libvirtd is restarted

Additional info:
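The failure is also visible through the public API, not just virsh: "virsh vcpuinfo" is a thin wrapper around virDomainGetVcpus(). A minimal read-only client (a sketch; the file name repro.c is illustrative) shows the call reporting 0 vcpus after the restart, rather than failing outright:

/* repro.c: call virDomainGetVcpus() directly, bypassing virsh.
 * Build: gcc repro.c -lvirt -o repro
 */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(int argc, char **argv)
{
    const char *name = argc > 1 ? argv[1] : "node1";
    virConnectPtr conn = NULL;
    virDomainPtr dom = NULL;
    virDomainInfo dominfo;
    virVcpuInfoPtr info = NULL;
    int i, n, ret = 1;

    /* NULL URI: connect to the default local hypervisor */
    if (!(conn = virConnectOpenReadOnly(NULL))) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }
    if (!(dom = virDomainLookupByName(conn, name))) {
        fprintf(stderr, "no domain named '%s'\n", name);
        goto cleanup;
    }
    if (virDomainGetInfo(dom, &dominfo) < 0)
        goto cleanup;

    if (!(info = calloc(dominfo.nrVirtCpu, sizeof(*info))))
        goto cleanup;

    /* This is the call "virsh vcpuinfo" makes; on an affected build it
     * reports 0 vcpus after a libvirtd restart, so virsh prints nothing. */
    n = virDomainGetVcpus(dom, info, dominfo.nrVirtCpu, NULL, 0);
    printf("virDomainGetVcpus: %d of %d vcpus reported\n",
           n, dominfo.nrVirtCpu);
    for (i = 0; i < n; i++)
        printf("VCPU %u on CPU %d, cpuTime %llu ns\n",
               info[i].number, info[i].cpu,
               (unsigned long long)info[i].cpuTime);
    ret = 0;

cleanup:
    free(info);
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return ret;
}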
Only when nvcpupids > 0 can the vcpu info be reported by qemud:

<snip>
qemudDomainGetVcpus(virDomainPtr dom,
                    virVcpuInfoPtr info,
                    int maxinfo,
                    unsigned char *cpumaps,
                    int maplen) {
...
    /* Clamp to actual number of vcpus */
    if (maxinfo > vm->nvcpupids)
        maxinfo = vm->nvcpupids;

    if (maxinfo >= 1) {
        if (info != NULL) {
            memset(info, 0, sizeof(*info) * maxinfo);
            for (i = 0 ; i < maxinfo ; i++) {
                info[i].number = i;
                info[i].state = VIR_VCPU_RUNNING;
                if (vm->vcpupids != NULL &&
                    qemudGetProcessInfo(&(info[i].cpuTime),
</snip>

But the vcpu pids are not persistent across a libvirtd restart, since they are not recorded in the status file (/var/run/libvirt/qemu/node1.xml). So qemudReconnectVMs(), which is called during libvirtd startup to recover the status of already running guests, cannot get the values of "nvcpupids" and "vcpupids".

In libvirt-0.7.7-5.fc13.rpm, this problem has been fixed: the vcpu pids are logged in the status file

static int qemuDomainObjPrivateXMLFormat(virBufferPtr buf, void *data)
{
...
    if (priv->nvcpupids) {
        int i;
        virBufferAddLit(buf, "  <vcpus>\n");
        for (i = 0 ; i < priv->nvcpupids ; i++) {
            virBufferVSprintf(buf, "    <vcpu pid='%d'/>\n",
                              priv->vcpupids[i]);
        }
        virBufferAddLit(buf, "  </vcpus>\n");
    }

    return 0;
}

and parsed back in qemuDomainObjPrivateXMLParse() when reconnecting to the running domain.
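The parse side is the mirror image of the format side above. libvirt's actual qemuDomainObjPrivateXMLParse() uses internal XPath helpers, but the idea can be sketched as a standalone libxml2 program (a sketch only; the XPath expression and output format are illustrative, not libvirt's code):

/* readpids.c: pull the <vcpu pid='...'/> entries back out of a status
 * file, as the daemon must do when it reconnects to a running guest.
 * Build: gcc readpids.c $(xml2-config --cflags --libs) -o readpids
 */
#include <stdio.h>
#include <libxml/parser.h>
#include <libxml/xpath.h>

int main(int argc, char **argv)
{
    xmlDocPtr doc;
    xmlXPathContextPtr ctxt;
    xmlXPathObjectPtr obj;
    int i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /var/run/libvirt/qemu/<dom>.xml\n",
                argv[0]);
        return 1;
    }
    if (!(doc = xmlReadFile(argv[1], NULL, 0))) {
        fprintf(stderr, "cannot parse %s\n", argv[1]);
        return 1;
    }

    ctxt = xmlXPathNewContext(doc);
    obj = xmlXPathEvalExpression((const xmlChar *)"//vcpus/vcpu/@pid",
                                 ctxt);
    if (obj && obj->nodesetval) {
        for (i = 0; i < obj->nodesetval->nodeNr; i++) {
            /* each node is a pid attribute; print its value */
            xmlChar *pid = xmlNodeGetContent(obj->nodesetval->nodeTab[i]);
            printf("vcpu %d: pid %s\n", i, pid);
            xmlFree(pid);
        }
    }

    xmlXPathFreeObject(obj);
    xmlXPathFreeContext(ctxt);
    xmlFreeDoc(doc);
    return 0;
}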
Fixed in libvirt-0.8.2-1.el5
Verified this bug on RHEL5u6 Server x86_64 KVM, and it PASSED.

1. Start a guest, and check vcpu info

# virsh list --all
 Id Name                 State
----------------------------------
  1 rhel5u5              running

# virsh vcpuinfo rhel5u5
VCPU:           0
CPU:            3
State:          running
CPU time:       17.6s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       11.0s
CPU Affinity:   yyyy

2. Restart the libvirtd service.

# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]

3. Re-check vcpu info

# virsh vcpuinfo rhel5u5
VCPU:           0
CPU:            1
State:          running
CPU time:       17.7s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       11.0s
CPU Affinity:   yyyy
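As a side note on where the "CPU time" lines above come from: once the vcpu pids are known again, each vcpu's time is read from that thread's /proc/<pid>/stat. A rough standalone sketch of the computation (the helper name vcpu_cpu_time is hypothetical; libvirt's qemudGetProcessInfo differs in detail):

/* cputime.c: approximate a vcpu's "CPU time" by reading utime+stime
 * from /proc/<pid>/stat for the vcpu thread pid.
 * Build: gcc cputime.c -o cputime
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int vcpu_cpu_time(pid_t pid, double *seconds)
{
    char path[64];
    unsigned long long utime, stime;
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
    if (!(f = fopen(path, "r")))
        return -1;

    /* Fields 14 (utime) and 15 (stime), in clock ticks.  Field 2
     * (comm) can contain spaces, so skip to the closing ')' first. */
    if (fscanf(f, "%*d (%*[^)]) %*c"
                  " %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u"
                  " %llu %llu", &utime, &stime) != 2) {
        fclose(f);
        return -1;
    }
    fclose(f);

    *seconds = (double)(utime + stime) / sysconf(_SC_CLK_TCK);
    return 0;
}

int main(int argc, char **argv)
{
    double s;

    if (argc != 2 || vcpu_cpu_time((pid_t)atoi(argv[1]), &s) < 0) {
        fprintf(stderr, "usage: %s <vcpu-pid>\n", argv[0]);
        return 1;
    }
    printf("CPU time: %.1fs\n", s);
    return 0;
}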
Verified and PASSED in the environments below, according to the steps in comment 5:

- RHEL5.6-Server-x86_64-KVM
- RHEL5.6-Server-x86_64-Xen
- RHEL5.6-Client-i386-Xen
- RHEL5.6-Server-ia64-Xen

kernel-xen-2.6.18-228.el5
xen-3.0.3-117.el5
kvm-qemu-img-83-205.el5
kernel-2.6.18-228.el5
libvirt-0.8.2-8.el5
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHEA-2011-0060.html