Bug 241303 - Unable to open the vms although the vms are running fine
Product: Fedora
Classification: Fedora
Component: xen
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Assigned To: Daniel Berrange
QA Contact: Martin Jenner
Depends On:
Reported: 2007-05-24 17:55 EDT by Srihari Vijayaraghavan
Modified: 2008-05-06 21:49 EDT
CC: 2 users

Doc Type: Bug Fix
Last Closed: 2008-05-06 21:49:06 EDT

Attachments
Fix handling of file descriptors on VNC client disconnect (2.56 KB, patch)
2007-05-25 09:01 EDT, Daniel Berrange
Description Srihari Vijayaraghavan 2007-05-24 17:55:47 EDT
Description of problem:
Unable to open the VMs although the VMs are running fine. Virtual Machine
Manager stops responding. Terminating it is then the only option.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Start Virtual Machine Manager (VMM)
2. Start a few VMs (QEMU with HVM enabled); observe that the consoles of the
individual VMs work fine
3. Exit VMM, then start it again. Now try opening one of the VMs started in the
step above; observe that VMM no longer responds, and terminating it is the only
way out.

Actual results:
VMM just hangs without responding.

Expected results:
VMM should open the console of each individual VM successfully.

Additional info:
on AMD 5600, with hvm support.
Comment 1 Srihari Vijayaraghavan 2007-05-25 04:00:11 EDT
When VMM is unable to open the console of the VM in question, then, and only
then, the VM's qemu process starts consuming more CPU than usual (say, from 1%
CPU utilisation it often jumps to 10-20%, but not much more).

After that happens, VMM never recovers, forcing me to terminate it forcefully.
Weird.

Whether VMM is working or not, the individual vms themselves are quite unaware
of the VMM's problem & continue to function normally (tested over, say an SSH
session to the Linuxy/Unixy vms).

Of course, it'd be nice to have a reachable/working console to each vm, no
matter how many times VMM is restarted.
Comment 2 Daniel Berrange 2007-05-25 08:59:37 EDT
Looking at the Xen codebase, it appears they are missing a data corruption
bugfix in the VNC server which will (randomly) hit after a VNC client
disconnects. I'm almost certain you're hitting this, since the behaviour you
describe, with high CPU load & hangs on subsequent connects, matches the
symptoms of the bug.
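A minimal sketch of this class of bug (illustrative only, not the actual Xen/QEMU patch; the `client_set`, `client_add`, and `client_disconnect` names are invented for the example): a select()-based server such as a VNC server must clear a disconnected client's descriptor from its fd_set before closing it. A stale descriptor left in the set makes every subsequent select() return immediately with that fd "ready", so the event loop busy-spins, matching the high CPU usage reported above.

```c
#include <assert.h>
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical VNC-style server state: client sockets are watched
 * via an fd_set passed to select().  All names are illustrative. */
struct client_set {
    fd_set fds;
    int    maxfd;
};

/* Register a newly connected client socket for select() polling. */
void client_add(struct client_set *s, int fd)
{
    FD_SET(fd, &s->fds);
    if (fd > s->maxfd)
        s->maxfd = fd;
}

/* On disconnect the fd must be cleared from the set *and* closed.
 * If it is closed (or at EOF) but left in the set, select() keeps
 * reporting it ready, and the event loop spins at high CPU. */
void client_disconnect(struct client_set *s, int fd)
{
    FD_CLR(fd, &s->fds);
    close(fd);
}
```

The essential point is that the cleanup is done in one place, so no code path can close the socket while leaving it registered for polling.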
Comment 3 Daniel Berrange 2007-05-25 09:01:34 EDT
Created attachment 155448 [details]
Fix handling of file descriptors on VNC client disconnect

This is the patch applied to upstream QEMU. Hopefully it will work with QEMU
0.8.0 without too much further work.
Comment 4 Srihari Vijayaraghavan 2007-05-25 09:36:37 EDT
Sounds good. May I request the RPM versions of the packages involved? If you
provide them :-), I'm quite keen to test them.

(Fedora 7 test 4 system with up to date updates has qemu-0.9.0-2.fc7. Would the
above fix be applicable there also?)

Thanks for a quick response & analysis of the problem. I really appreciate that.
Comment 5 Daniel Berrange 2007-05-25 09:42:41 EDT
The 'qemu' RPM is not actually used by Xen at all - Xen has forked the QEMU
code and maintains its own private copy. I'll be pushing out this fix in an
updated RPM ASAP.
Comment 6 Srihari Vijayaraghavan 2007-05-25 19:44:33 EDT
Sorry, due to my own ignorance I'm unable to tell whether the updates you're
referring to will really address my problem. Sorry if this is an FAQ: I have
only KVM-based QEMU VMs in this system (i.e. only fully virtualised VMs, no
Xen paravirtualised ones); in my case, do you think the updates (xen? qemu?)
you're talking about will help?

$ rpm -qa|egrep -i 'xen|qemu|kvm|kernel|vnc|virt'

Comment 7 Srihari Vijayaraghavan 2007-05-29 07:31:00 EDT
In VMM, I'm able to reproduce the problem only against the guest with ID 1, I
think. Or it's quite easy to reproduce against guest ID 1 & extremely hard
against others.
Comment 8 Red Hat Bugzilla 2007-07-24 20:07:56 EDT
change QA contact
Comment 9 Bug Zapper 2008-04-03 20:54:35 EDT
Based on the date this bug was created, it appears to have been reported
against rawhide during the development of a Fedora release that is no
longer maintained. In order to refocus our efforts as a project we are
flagging all of the open bugs for releases which are no longer
maintained. If this bug remains in NEEDINFO thirty (30) days from now,
we will automatically close it.

If you can reproduce this bug in a maintained Fedora version (7, 8, or
rawhide), please change this bug to the respective version and change
the status to ASSIGNED. (If you're unable to change the bug's version
or status, add a comment to the bug and someone will change it for you.)

Thanks for your help, and we apologize again that we haven't handled
these issues to this point.

The process we're following is outlined at
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.
Comment 10 Bug Zapper 2008-05-06 21:49:02 EDT
This bug has been in NEEDINFO for more than 30 days since feedback was
first requested. As a result we are closing it.

If you can reproduce this bug in the future against a maintained Fedora
version please feel free to reopen it against that version.

The process we're following is outlined here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping