Bug 796092 - [virt-manager] Disk I/O graphic doesn't update after shutting down and starting a guest again
Summary: [virt-manager] Disk I/O graphic doesn't update after shutting down and starting a guest again
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-02-22 09:49 UTC by Geyang Kong
Modified: 2014-07-06 19:31 UTC
CC: 10 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-02-01 13:16:36 UTC
Embargoed:


Attachments:
Virt-manager's log (5.35 KB, text/plain), 2012-02-22 09:50 UTC, Geyang Kong
Screenshot (670.27 KB, image/png), 2012-02-22 09:51 UTC, Geyang Kong
xml file of guest (2.86 KB, text/plain), 2012-04-05 07:11 UTC, Daisy Wu

Description Geyang Kong 2012-02-22 09:49:31 UTC
Description of problem:
  Disk I/O graphic doesn't update after shutting down and starting a guest again.

Version-Release number of selected component (if applicable):
libvirt-0.9.10-1.el6.x86_64
python-virtinst-0.600.0-7.el6.noarch
qemu-kvm-0.12.1.2-2.229.el6.x86_64
virt-manager-0.9.0-10.el6.x86_64
Linux version 2.6.32-220.el6.x86_64

How reproducible:
100%

Steps to Reproduce:

1. Start virt-manager and make sure there is a guest.

2. Click "Edit->Preferences->Stats" and check the Disk I/O checkbox. 

3. Click "View->Graph->Disk I/O".

4. Start the guest and make sure you can see a polygonal line in the Disk I/O graphic.

5. Shut down the guest and wait until you can see the end of the polygonal line (refer to Screenshot.png in attachments).

6. Wait about 5 seconds after step 5 and start the guest again.

7. Check the Disk I/O graphic.

Actual results:

1. There is nothing in the Disk I/O graphic.

Expected results:

1. The Disk I/O graphic should update dynamically.

Additional info:

1. After step 5, don't wait too long; start timing as soon as you see the end of the polygonal line.

2. There is nothing in the libvirtd.log file, so I didn't attach it.

Comment 1 Geyang Kong 2012-02-22 09:50:25 UTC
Created attachment 564897 [details]
Virt-manager's log

Comment 2 Geyang Kong 2012-02-22 09:51:00 UTC
Created attachment 564898 [details]
Screenshot

Comment 3 Cole Robinson 2012-03-02 00:10:16 UTC
[Wed, 22 Feb 2012 16:13:19 virt-manager 3403] DEBUG (engine:1021) Starting vm 'T1'.
[Wed, 22 Feb 2012 16:13:52 virt-manager 3403] DEBUG (engine:991) Destroying vm 'T1'.
[Wed, 22 Feb 2012 16:13:53 virt-manager 3403] ERROR (domain:1528) Error reading disk stats for 'T1' dev 'vda': Requested operation is not valid: domain is not running
[Wed, 22 Feb 2012 16:13:53 virt-manager 3403] DEBUG (domain:1529) Adding vda to skip list.
[Wed, 22 Feb 2012 16:13:59 virt-manager 3403] DEBUG (engine:1021) Starting vm 'T1'.
[Wed, 22 Feb 2012 16:14:24 virt-manager 3403] DEBUG (engine:550) Exiting app normally.

Here's the root cause: we try polling the VM while it is shutting down, the poll returns an error, and we blacklist the device thinking it's broken.
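The problematic pattern can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names (StatsError, DiskStatsPoller), not virt-manager's actual code: a transient stats error during shutdown adds the device to a skip list, and the device is then never polled again, even after the guest restarts.

```python
# Sketch of the skip-list bug (hypothetical names; the real logic
# lives in virt-manager's domain stats polling code).

class StatsError(Exception):
    """Stands in for libvirt.libvirtError raised by a stats call."""

class DiskStatsPoller:
    def __init__(self, read_stats):
        self._read_stats = read_stats   # callable: dev -> (rd, wr)
        self._skip = set()              # devices we gave up on

    def poll(self, dev):
        if dev in self._skip:
            return None                 # device was blacklisted earlier
        try:
            return self._read_stats(dev)
        except StatsError:
            # BUG: a transient failure (e.g. the guest is shutting
            # down) blacklists the device permanently.
            self._skip.add(dev)
            return None

# Simulate the reported sequence: one failing poll during shutdown
# blacklists 'vda', so polling stays dead after the guest restarts.
state = {"running": True}

def read_stats(dev):
    if not state["running"]:
        raise StatsError("domain is not running")
    return (100, 200)

poller = DiskStatsPoller(read_stats)
assert poller.poll("vda") == (100, 200)   # guest running: stats OK
state["running"] = False
assert poller.poll("vda") is None         # shutdown race: blacklisted
state["running"] = True
assert poller.poll("vda") is None         # restarted, but still skipped
```

This matches the log above: the poll during shutdown fails with "domain is not running" and vda lands on the skip list for the rest of the virt-manager session.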

Fixed upstream:

http://git.fedorahosted.org/git?p=virt-manager.git;a=commit;h=0782a10b2980945aaadf0e3e020f8ded66308ae0

Comment 4 Cole Robinson 2012-04-02 23:25:12 UTC
Fixed in virt-manager-0.9.0-11.el6

Comment 7 Daisy Wu 2012-04-05 07:10:14 UTC
This bug is not fixed with:
virt-manager-0.9.0-11.el6.x86_64
libvirt-0.9.10-9.el6.x86_64
python-virtinst-0.600.0-8.el6.noarch
qemu-kvm-0.12.1.2-2.269.el6.x86_64

Steps:
1. Prepared rhel6.3 guest.
2. Start virt-manager.
#virt-manager --debug
3. Click "Edit->Preferences->Stats" and check the Disk I/O checkbox. 
4. Click "View->Graph->Disk I/O".
5. Start the guest and make sure you can see a polygonal line in the Disk I/O graphic.
6. Shut down the guest and wait until you can see the end of the polygonal line.
7. Wait about 5 seconds after step 6 and start the guest again.
8. Check the Disk I/O graphic.
9. There is nothing in the Disk I/O graphic.
Related debug info as follows:
2012-04-05 14:39:27,920 (engine:1021): Starting vm 'rhel6.3'.
2012-04-05 14:39:46,193 (engine:426): Tick is slow, not running at requested rate.
2012-04-05 14:40:07,136 (engine:991): Destroying vm 'rhel6.3'.
2012-04-05 14:40:07,983 (domain:1531): Error reading disk stats for 'rhel6.3' dev 'vda': Unable to read from monitor: Connection reset by peer
2012-04-05 14:40:07,985 (domain:1533): Adding vda to skip list
2012-04-05 14:40:07,988 (domain:1531): Error reading disk stats for 'rhel6.3' dev 'hdc': Requested operation is not valid: domain is not running
2012-04-05 14:40:07,989 (domain:1536): Aren't running, don't add to skiplist
2012-04-05 14:40:09,029 (domain:1531): Error reading disk stats for 'rhel6.3' dev 'hdc': Requested operation is not valid: domain is not running
2012-04-05 14:40:09,029 (domain:1533): Adding hdc to skip list
2012-04-05 14:40:18,618 (engine:1021): Starting vm 'rhel6.3'.

Changed the status to ASSIGNED

Comment 8 Daisy Wu 2012-04-05 07:11:54 UTC
Created attachment 575300 [details]
xml file of guest

Comment 10 Cole Robinson 2012-04-16 14:45:04 UTC
Hmm, unfortunately this stuff is all just racy. My patch makes things a bit better, but it needs a better solution, which will have to wait till 6.4 unfortunately.

Comment 14 Cole Robinson 2014-02-01 13:16:36 UTC
Fixed upstream now:

commit 17c0ae3a3c3c4f760e85703b063b53d2979f4020
Author: Cole Robinson <crobinso>
Date:   Sat Feb 1 08:15:24 2014 -0500

    domain: Reset net/disk skip lists when VM is inactive (bz 796092)
    
    Racey shutdown can mean we try to poll disk stats at a time when
    it won't work. Resetting the lists give things a chance to work
    correctly when the VM is rebooted.
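The fix described in the commit message can be sketched as follows. Again the names are hypothetical, not taken from the real patch: the key idea is to clear the skip lists whenever the VM is observed inactive, so a device blacklisted during a racy shutdown gets another chance on the next boot.

```python
# Sketch of the upstream fix (hypothetical names): reset the skip
# list when the VM is inactive, so devices blacklisted during a
# racy shutdown are retried after the guest is rebooted.

class StatsError(Exception):
    """Stands in for libvirt.libvirtError raised by a stats call."""

class DiskStatsPoller:
    def __init__(self, read_stats, is_running):
        self._read_stats = read_stats   # callable: dev -> (rd, wr)
        self._is_running = is_running   # callable: () -> bool
        self._skip = set()

    def tick(self, dev):
        if not self._is_running():
            # Fix: the VM is inactive, so any earlier blacklisting
            # was likely a shutdown race. Start fresh on next boot.
            self._skip.clear()
            return None
        if dev in self._skip:
            return None
        try:
            return self._read_stats(dev)
        except StatsError:
            self._skip.add(dev)
            return None

state = {"running": True}

def read_stats(dev):
    if not state["running"]:
        raise StatsError("domain is not running")
    return (100, 200)

poller = DiskStatsPoller(read_stats, lambda: state["running"])
assert poller.tick("vda") == (100, 200)   # guest running: stats OK
state["running"] = False
poller._skip.add("vda")      # pretend the shutdown race blacklisted it
poller.tick("vda")           # inactive tick clears the skip list
state["running"] = True
assert poller.tick("vda") == (100, 200)   # stats work again after reboot
```

The reset doesn't remove the race itself; it just ensures a wrongly blacklisted device recovers once the VM comes back up.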

