Bug 742055 - virt-manager loses connection if it receives dominfo error while guest is shutting down
Summary: virt-manager loses connection if it receives dominfo error while guest is shutting down
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virt-manager
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Cole Robinson
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 757947 (view as bug list)
Depends On:
Blocks: 727267 735357
 
Reported: 2011-09-28 21:26 UTC by Eric Blake
Modified: 2013-01-17 07:01 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Occasionally, if virt-manager tried to read a domain's information while that domain was shutting down, virt-manager would receive an error from libvirt, and incorrectly close the libvirt connection in the UI. virt-manager has been changed to expect errors in this case and not incorrectly close the libvirt connection.
Clone Of:
Environment:
Last Closed: 2012-06-20 12:39:04 UTC
Target Upstream Version:
Embargoed:


Links
System: Red Hat Product Errata
ID: RHBA-2012:0785
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: virt-manager bug fix and enhancement update
Last Updated: 2012-06-19 20:34:46 UTC

Description Eric Blake 2011-09-28 21:26:35 UTC
Description of problem:
I'm running virt-manager on a dual-screen setup, and had one of my two screens dedicated to one VM running in full-screen mode, while the other screen was running a second VM in windowed mode.  I initiated a shutdown of the guest in the fullscreen window, and when it exited, I got a traceback message, and virt-manager closed the connection to libvirtd (in turn losing the windowed VM in the other monitor).

Version-Release number of selected component (if applicable):
virt-manager-0.9.0-6.el6.x86_64

How reproducible:
It doesn't happen when focus is not on the full-screen guest, but it appears to be fairly reliable when the full-screen guest has focus while it is shutting down.

Steps to Reproduce:
1. Shut down a guest running in full-screen mode while that guest has focus.
  
Actual results:
A traceback popped up, and virt-manager lost the connection and all windows associated with it.

Error polling connection 'qemu:///system': Unable to read from monitor: Connection reset by peer

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 440, in _tick
    conn.tick()
  File "/usr/share/virt-manager/virtManager/connection.py", line 1507, in tick
    vm.tick(now)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1531, in tick
    info = self._backend.info()
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1406, in info
    if ret is None: raise libvirtError ('virDomainGetInfo() failed', dom=self)
libvirtError: Unable to read from monitor: Connection reset by peer


Expected results:
the guest that just shut down should pop out of fullscreen back into windowed mode with the typical "Guest not running" message, no other windows should be affected, and virt-manager should not lose the connection

Additional info:
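For reference, the failing frame in the traceback is plain virDomainGetInfo() polling. A minimal standalone sketch of the same race (the guest name is a placeholder, not from this report) polls dom.info() while the domain shuts down; on affected versions the loop can hit the same "Unable to read from monitor" error even though the libvirt connection itself is still healthy:

# Minimal sketch of the polling race; assumes a running guest named
# "testguest" on qemu:///system (placeholder name, not from this bug).
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('testguest')
dom.shutdown()  # ask the guest OS to shut down (ACPI request)

while True:
    try:
        state = dom.info()[0]
    except libvirt.libvirtError as e:
        # The QEMU monitor can vanish mid-poll while the guest shuts
        # down; dom.info() then raises "Unable to read from monitor:
        # Connection reset by peer" even though conn is still usable.
        print('transient dominfo error: %s' % e)
        break
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        print('guest shut off cleanly')
        break
    time.sleep(1)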

Comment 1 Eric Blake 2011-09-28 21:30:52 UTC
In case it is relevant, I was also running:
qemu-kvm-0.12.1.2-2.193.el6.x86_64
libvirt-0.9.4-13.el6.x86_64

Comment 3 RHEL Program Management 2011-10-07 16:11:46 UTC
Since the RHEL 6.2 External Beta has begun and this bug remains
unresolved, it has been rejected, as it was not proposed as an
exception or a blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 4 Cole Robinson 2011-10-13 17:54:22 UTC
Is this 100% reproducible? Does it require the full-screen setup you mention, or is it just specific to two running VMs, or one full-screen VM, etc.?

Deferring to 6.3 for now

Comment 5 Eric Blake 2011-10-13 18:00:13 UTC
I haven't seen it in a few days, but I also haven't been running multiple VMs, so I'm not sure it is 100% reproducible.  I'll post more the next time it happens to me; deferring to 6.3 seems okay.

Comment 6 Laurent Léonard 2011-10-29 17:55:40 UTC
The issue doesn't seem to be related to fullscreen mode.

With virt-manager, libvirt resets the connection when I shut down a VM
if the VNC window is not kept open until the VM has completely shut down. I can reproduce the issue with one or more running guests.

I get the following error message:

Error polling connection 'qemu:///system': Unable to read from monitor: Connection reset by peer

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 440, in _tick
    conn.tick()
  File "/usr/share/virt-manager/virtManager/connection.py", line 1507, in tick
    vm.tick(now)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1526, in tick
    info = self._backend.info()
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1411, in info
    if ret is None: raise libvirtError ('virDomainGetInfo() failed', dom=self)
libvirtError: Unable to read from monitor: Connection reset by peer

I'm using Virt-manager 0.9.0 and libvirt 0.9.6.

Comment 8 Boris Derzhavets 2011-11-06 09:19:27 UTC
I can reproduce the issue exactly as described in Comment #6.

Environment:
qemu-kvm 0.15.1, libvirt 0.9.6, virt-manager 0.9.0 on top of Ubuntu Oneiric.
A Spice session initiated via virt-manager is connected via spicy.
After domain shutdown:

Error polling connection 'qemu:///system': Unable to read from monitor: Connection reset by peer

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 440, in _tick
    conn.tick()
  File "/usr/share/virt-manager/virtManager/connection.py", line 1507, in tick
    vm.tick(now)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1531, in tick
    info = self._backend.info()
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1411, in info
    if ret is None: raise libvirtError ('virDomainGetInfo() failed', dom=self)
libvirtError: Unable to read from monitor: Connection reset by peer

Just one guest is running. Restarting the libvirtd daemon may allow at most 2-3 successful shutdowns. Then:

Error polling connection 'qemu:///system': Unable to read from monitor: Connection reset by peer

Comment 9 Boris Derzhavets 2011-11-06 09:28:08 UTC
> I can reproduce the issue exactly as described in Comment #6

Disregard this sentence. Same issue, but in a different environment.
When the spicy window showing the shutdown messages gets closed, the error window pops up.

Comment 10 Cole Robinson 2011-12-09 22:16:30 UTC
There's another similar bug report (bug 757947), but I'm not duping them yet, since this one has more info while the other one has partner info attached.

Comment 11 Cole Robinson 2012-01-29 16:45:06 UTC
*** Bug 757947 has been marked as a duplicate of this bug. ***

Comment 13 Cole Robinson 2012-02-01 20:03:31 UTC
Fixed in virt-manager-0.9.0-8.el6

Comment 15 Daisy Wu 2012-02-13 06:48:48 UTC
This bug can be reproduced with:
kernel-2.6.32-220.el6.x86_64
libvirt-0.9.4-14.el6.x86_64
virt-manager-0.9.0-6.el6.x86_64
python-virtinst-0.600.0-5.el6.noarch

Verified with:
kernel-2.6.32-220.el6.x86_64
libvirt-0.9.10-0rc2.el6.x86_64
python-virtinst-0.600.0-7.el6.noarch
virt-manager-0.9.0-9.el6.x86_64


Steps:
1. Prepare two guests (rhel6.2 and win7) with a VNC graphics device.
2. Run all the guests.
3. Log in to the rhel6.2 guest and shut it down using the guest OS's own shutdown function, not the virt-manager tool button.
4. Check the status of the guests and virt-manager. All of them work well with no error messages.
5. Change the graphics device to Spice and repeat steps 2-4. Guests and virt-manager work well with no error messages.

Changed the status to VERIFIED.
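
For what it's worth, the manual verification above can be approximated with the libvirt Python bindings. A rough sketch, with placeholder guest names, dom.shutdown() standing in for shutting down from inside the guest, and conn.isAlive() requiring libvirt >= 0.9.8:

# Rough verification sketch: shut down one guest while another keeps
# running, then confirm the connection and the second guest survive.
# Guest names are placeholders; adjust them for the local setup.
import time
import libvirt

conn = libvirt.open('qemu:///system')
victim = conn.lookupByName('rhel6.2-guest')
bystander = conn.lookupByName('win7-guest')

victim.shutdown()  # graceful shutdown request to the guest OS
time.sleep(30)     # give the guest time to power off

# With the fix, neither the connection nor the unrelated guest should
# be affected by the first guest's shutdown.
assert conn.isAlive() == 1, 'libvirt connection was lost'
assert bystander.isActive() == 1, 'unrelated guest was affected'
print('connection and second guest survived the shutdown')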

Comment 16 Cole Robinson 2012-06-12 15:40:12 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Occasionally, if virt-manager tried to read a domain's information while that domain was shutting down, virt-manager would receive an error from libvirt, and incorrectly close the libvirt connection in the UI. virt-manager has been changed to expect errors in this case and not incorrectly close the libvirt connection.
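
Illustratively, the change described above amounts to catching the libvirt error at per-domain granularity during the poll. A sketch of that pattern, modeled loosely on the tick() frames in the tracebacks above; the class and helper names here are illustrative, not the actual virt-manager patch:

# Illustrative only -- not the real virt-manager patch. A dominfo
# failure during a poll is treated as a per-domain event (the guest
# probably just finished shutting down), never as a dead connection.
import libvirt

class PolledDomain(object):
    def __init__(self, backend):
        self._backend = backend  # a libvirt.virDomain handle
        self.state = None

    def tick(self, now):
        try:
            info = self._backend.info()
        except libvirt.libvirtError:
            # Expected while the guest shuts down ("Unable to read
            # from monitor"); assume shut off for simplicity instead
            # of letting the error bubble up to connection teardown.
            self.state = libvirt.VIR_DOMAIN_SHUTOFF
            return
        self.state = info[0]  # first info() field is the run state

Catching the error per domain is what keeps one shutting-down guest from tearing down the connection-level tick loop that every other VM window depends on.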

Comment 18 errata-xmlrpc 2012-06-20 12:39:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0785.html

