Bug 615754

Summary: After restarting libvirtd, "virsh vcpuinfo" doesn't work for the guests which were running before restarting the daemon

Product: Red Hat Enterprise Linux 5
Component: libvirt
Version: 5.5
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Mark Wu <dwu>
Assignee: Daniel Veillard <veillard>
QA Contact: Virtualization Bugs <virt-bugs>
CC: eblake, jentrena, jialiu, mjenner, mzhan, sputhenp, tao, virt-maint, xen-maint
Target Milestone: rc
Hardware: All
OS: Linux
Fixed In Version: libvirt-0.8.2-1.el5
Doc Type: Bug Fix
Last Closed: 2011-01-13 23:14:13 UTC

Description Mark Wu 2010-07-18 13:26:05 UTC
Description of problem:
The command "virsh vcpuinfo domN" just prints a blank line if domN was started before libvirtd was restarted.

Version-Release number of selected component (if applicable):
libvirt-0.6.3-33.el5.rpm

How reproducible:
100%

Steps to Reproduce:
1. start virtual machine "node1"
2. service libvirtd restart
3. virsh vcpuinfo node1

  
Actual results:
[root@dhcp-129-138 ~]# virsh vcpuinfo node1

[root@dhcp-129-138 ~]# 

Expected results:
"virsh vcpuinfo" still works fine after libvirtd restarting


Additional info:

Comment 1 Mark Wu 2010-07-18 14:06:28 UTC
The vcpu info can only be reported by qemud when nvcpupids > 0:
<snip>
qemudDomainGetVcpus(virDomainPtr dom,
                    virVcpuInfoPtr info,
                    int maxinfo,
                    unsigned char *cpumaps,
                    int maplen) {
...

    /* Clamp to actual number of vcpus */
    if (maxinfo > vm->nvcpupids)
        maxinfo = vm->nvcpupids;

    if (maxinfo >= 1) {
        if (info != NULL) {
            memset(info, 0, sizeof(*info) * maxinfo);
            for (i = 0 ; i < maxinfo ; i++) {
                info[i].number = i;
                info[i].state = VIR_VCPU_RUNNING;

                if (vm->vcpupids != NULL &&
                    qemudGetProcessInfo(&(info[i].cpuTime),
</snip>

But the vcpu pids do not survive a daemon restart, because they are not recorded in the status file (/var/run/libvirt/qemu/node1.xml). So qemudReconnectVMs(), which is called during libvirtd startup to pick up the status of already running guests, cannot recover the values of "nvcpupids" and "vcpupids".

In libvirt-0.7.7-5.fc13, this problem has been fixed by logging the vcpu pids in the status file:
static int qemuDomainObjPrivateXMLFormat(virBufferPtr buf, void *data)
{
   ...
   if (priv->nvcpupids) {
        int i;
        virBufferAddLit(buf, "  <vcpus>\n");
        for (i = 0 ; i < priv->nvcpupids ; i++) {
            virBufferVSprintf(buf, "    <vcpu pid='%d'/>\n", priv->vcpupids[i]);
        }
        virBufferAddLit(buf, "  </vcpus>\n");
    }

    return 0;
}
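
For a guest with two vcpus, this writes a fragment like the following into the status file (the pid values here are only placeholders):

  <vcpus>
    <vcpu pid='4321'/>
    <vcpu pid='4322'/>
  </vcpus>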

And by parsing them back in qemuDomainObjPrivateXMLParse() when reconnecting to the running domains.
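
For illustration only, here is a minimal, self-contained sketch (not the actual qemuDomainObjPrivateXMLParse code) of how such a fragment could be read back with plain libxml2 to rebuild nvcpupids/vcpupids; the embedded XML, file name and pid values are made up. Build with: gcc vcpupids.c $(xml2-config --cflags --libs)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libxml/parser.h>
#include <libxml/xpath.h>

int main(void)
{
    /* Hypothetical excerpt of a status file; the pids are made up. */
    const char *xml =
        "<vcpus>\n"
        "  <vcpu pid='4321'/>\n"
        "  <vcpu pid='4322'/>\n"
        "</vcpus>\n";

    xmlDocPtr doc = xmlReadMemory(xml, strlen(xml), "status.xml", NULL, 0);
    if (!doc)
        return 1;

    xmlXPathContextPtr ctxt = xmlXPathNewContext(doc);
    xmlXPathObjectPtr obj = xmlXPathEvalExpression(BAD_CAST "//vcpus/vcpu", ctxt);

    if (obj && obj->nodesetval) {
        /* nvcpupids is exactly what the daemon could not recover before the fix */
        int nvcpupids = obj->nodesetval->nodeNr;
        int *vcpupids = malloc(sizeof(int) * nvcpupids);

        for (int i = 0; i < nvcpupids; i++) {
            xmlChar *pid = xmlGetProp(obj->nodesetval->nodeTab[i], BAD_CAST "pid");
            vcpupids[i] = pid ? atoi((const char *)pid) : -1;
            printf("vcpu %d -> pid %d\n", i, vcpupids[i]);
            xmlFree(pid);
        }
        free(vcpupids);
    }

    xmlXPathFreeObject(obj);
    xmlXPathFreeContext(ctxt);
    xmlFreeDoc(doc);
    return 0;
}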

Comment 3 Jiri Denemark 2010-09-02 11:58:53 UTC
Fixed in libvirt-0.8.2-1.el5

Comment 5 Johnny Liu 2010-10-20 06:27:44 UTC
Verified this bug on RHEL5u6 Server x86_64 KVM, and it PASSED.

1. Start a guest, and check vcpu info
# virsh list --all
 Id Name                 State
----------------------------------
  1 rhel5u5              running

# virsh vcpuinfo rhel5u5
VCPU:           0
CPU:            3
State:          running
CPU time:       17.6s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       11.0s
CPU Affinity:   yyyy

2. Restart libvirtd service.
# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]

3. Re-check vcpu info
# virsh vcpuinfo rhel5u5
VCPU:           0
CPU:            1
State:          running
CPU time:       17.7s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       11.0s
CPU Affinity:   yyyy

Comment 6 Min Zhan 2010-10-25 10:04:51 UTC
Verified and passed in the environments below, following the steps in comment 5:
-RHEL5.6-Server-x86_64-KVM 
-RHEL5.6-Server-x86_64-Xen
-RHEL5.6-Client-i386-Xen
-RHEL5.6-Server-ia64-Xen

kernel-xen-2.6.18-228.el5
xen-3.0.3-117.el5
kvm-qemu-img-83-205.el5
kernel-2.6.18-228.el5
libvirt-0.8.2-8.el5

Comment 8 errata-xmlrpc 2011-01-13 23:14:13 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHEA-2011-0060.html