Bug 701394

Summary: [RFE] libvirt should report the elapsed time waiting for a response from a qemu process
Product: Red Hat Enterprise Linux 6
Reporter: Federico Simoncelli <fsimonce>
Component: libvirt
Assignee: Jiri Denemark <jdenemar>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: high
Version: 6.2
CC: abaron, dallan, danken, dyuan, mzhan, nzhang, syeghiay
Target Milestone: rc
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: libvirt-0.9.3-1.el6
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Clones: 701398 (view as bug list)
Environment:
Last Closed: 2011-12-06 11:06:29 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 693512, 701398

Description Federico Simoncelli 2011-05-02 19:33:40 UTC
Description of the feature:
Libvirt should have a method that reports whether it is waiting for a response from a domain (e.g. a qemu process) and the elapsed time.
The implementation could take advantage of the existing domjobinfo command.

If possible, we should also try to keep this information across a libvirt restart.
E.g. what happens now if libvirt is restarted while it is waiting for a qemu response?
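
For illustration only, here is a minimal sketch of what the existing job-info API (the C counterpart of the domjobinfo command mentioned above) already reports; it only covers active jobs such as migration, not an unresponsive qemu monitor, which is why a new method is being requested. The domain name "vm1" and the connection URI are assumptions:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* Open the local qemu driver and look up a domain by name
         * (both are illustrative assumptions). */
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "vm1") : NULL;
        virDomainJobInfo job;

        /* virDomainGetJobInfo() backs "virsh domjobinfo": it reports the
         * elapsed time of an active job, but nothing about how long libvirt
         * has been waiting for the qemu monitor to answer. */
        if (dom && virDomainGetJobInfo(dom, &job) == 0 &&
            job.type != VIR_DOMAIN_JOB_NONE)
            printf("job type %d, elapsed %llu ms\n", job.type, job.timeElapsed);

        if (dom)
            virDomainFree(dom);
        if (conn)
            virConnectClose(conn);
        return 0;
    }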

Comment 2 RHEL Program Management 2011-05-03 06:00:53 UTC
Since RHEL 6.1 External Beta has begun and this bug remains
unresolved, it has been rejected as it is not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 3 Dave Allan 2011-05-17 14:36:24 UTC
It looks to me like this BZ should be closed as a dup of BZ 692663, but I'll wait until we have a little more concrete design there before I do so.

Comment 4 Dave Allan 2011-05-22 06:45:53 UTC
Jirka tells me these are separate pieces of code, and in any case we'll need to test them separately, so definitely not a duplicate.

Comment 5 Jiri Denemark 2011-05-31 11:18:09 UTC
I'm working on patches which will add a new API and I'll post them once 0.9.2 is out.

Comment 6 Jiri Denemark 2011-06-07 13:03:26 UTC
Patches sent upstream: https://www.redhat.com/archives/libvir-list/2011-June/msg00323.html

Comment 7 Jiri Denemark 2011-06-16 17:19:02 UTC
This is now implemented upstream by v0.9.2-108-g67cc825, v0.9.2-109-g6301ce5, v0.9.2-110-g559fcf8, v0.9.2-111-g5f1bbec:

commit 67cc825dda5e01af5698c30deab7eb5e14849694
Author: Jiri Denemark <jdenemar>
Date:   Tue May 24 11:28:50 2011 +0300

    Introduce virDomainGetControlInfo API
    
    The API can be used to query current state of an interface to VMM used
    to control a domain. In QEMU world this translates into monitor
    connection.

commit 6301ce52359514d574c37bafa84dcade219b295b
Author: Jiri Denemark <jdenemar>
Date:   Tue May 31 17:37:00 2011 +0200

    Wire protocol and remote driver for virDomainGetControlInfo

commit 559fcf8a24ea090a7cc51f7cd8e7468922d5c1d7
Author: Jiri Denemark <jdenemar>
Date:   Tue May 31 18:34:20 2011 +0200

    qemu: Implement virDomainGetControlInfo

commit 5f1bbecb7dd1bc47247d61aed02fb3d233893f0f
Author: Jiri Denemark <jdenemar>
Date:   Tue May 31 18:21:58 2011 +0200

    virsh: Add support for virDomainGetControlInfo
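
For reference, a minimal sketch of how a client could query the new interface, assuming libvirt >= 0.9.3; the domain name "vm1" and the connection URI are illustrative:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "vm1") : NULL;
        virDomainControlInfo info;

        /* info.state is one of virDomainControlState (ok, job, occupied,
         * error) and info.stateTime says how long (in ms) the monitor
         * connection has been in that state. */
        if (dom && virDomainGetControlInfo(dom, &info, 0) == 0)
            printf("control state %u, details %u, for %llu ms\n",
                   info.state, info.details, info.stateTime);

        if (dom)
            virDomainFree(dom);
        if (conn)
            virConnectClose(conn);
        return 0;
    }

The last commit above exposes the same data in virsh as the domcontrol command (e.g. "virsh domcontrol vm1"), which prints the control state and, for states other than ok and error, how long the monitor interface has been in it.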

Comment 8 Nan Zhang 2011-07-06 03:02:34 UTC
Verified with the vdsm test vmTests.NonResponsiveVmTest.testMonitorDown; the test case passed, so the bug is fixed now.

Test builds:
libvirt-0.9.3-1.el6.x86_64
vdsm-4.9-80.el6.x86_64


	vmTests.NonResponsiveVmTest.testMonitorDown: 
		07:03:01 DEBUG   Loading environment data
		07:03:01 INFO    Starting to build environment
		07:03:01 DEBUG   Connecting host 'client1:http://10.66.85.203:54321'
		07:03:01 DEBUG   Connecting agent on 'client1'
		07:03:01 DEBUG   Making sure host 'client1' is clean
		07:03:01 DEBUG   Validate connecting host 'client1' to storage
		07:03:02 DEBUG   Connecting host 'client1' to storage
		07:03:03 DEBUG   Creating domains for pool 'spUUID1'
		07:03:03 DEBUG   Creating domain target for domain 'sdUUID1' on '36090a038d0f7f1d927d4d42c7867f25e'
		07:03:07 DEBUG   Creating storage domain 'sdUUID1:b155c10d-d9ac-4343-af94-de1b11772773' version 0
		07:03:17 DEBUG   Creating pool 'spUUID1:3ddff497-d019-4fa5-856a-deed91e646b9' with master domain 'b155c10d-d9ac-4343-af94-de1b11772773'
		07:03:22 DEBUG   Connecting pool 'spUUID1'
		07:03:23 DEBUG   Starting SPM for pool 'spUUID1'
		07:03:27 DEBUG   Activating domain 'sdUUID1'
		07:03:27 DEBUG   Creating image 'imgUUID1':76799fcf-1ee7-4f8a-aad8-d1b8fd69292e
		07:03:27 DEBUG   Creating volume c5d67cfa-67cc-43a8-90b3-1904aed11003 of image 76799fcf-1ee7-4f8a-aad8-d1b8fd69292e from parent 00000000-0000-0000-0000-000000000000
		07:03:31 DEBUG   Preparing vm 'vm1'
		07:03:31 DEBUG   Finished processing host 'client1'
		07:03:31 INFO    Finished building environment
		07:03:35 DEBUG   Vm a8798b63-2ed0-4344-b705-238b5ed55aeb is Powering up
		07:03:35 DEBUG   Vm a8798b63-2ed0-4344-b705-238b5ed55aeb is Up
		07:03:35 DEBUG   Waiting for Vm a8798b63-2ed0-4344-b705-238b5ed55aeb monitorResponse to become -1
		07:04:40 INFO    Trying to clean 1 hosts
		07:04:40 INFO    Starting clean up
		07:04:40 DEBUG   Releasing pools
		07:04:40 DEBUG   Cleaning pool '3ddff497-d019-4fa5-856a-deed91e646b9'
		07:04:40 DEBUG   Deactivating domain 'b155c10d-d9ac-4343-af94-de1b11772773'
		07:04:43 DEBUG   Deleting domains
		07:04:46 DEBUG   Deleting domain 'b155c10d-d9ac-4343-af94-de1b11772773'
		07:04:46 DEBUG   Formatting domain 'b155c10d-d9ac-4343-af94-de1b11772773'
		07:04:50 DEBUG   Destroying domain targets
		07:04:53 DEBUG   Disconnecting from storage
		07:05:00 DEBUG   Verify file domains are totally cleaned on http://10.66.85.203:54321 
		07:05:00 INFO    Finished clean up
	Result: OK

Comment 10 Nan Zhang 2011-07-07 02:55:15 UTC
According to comment 8, moving this bug to VERIFIED.

Comment 11 errata-xmlrpc 2011-12-06 11:06:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html