Bug 241303 - Unable to open the vms although the vms are running fine
Alias: None
Product: Fedora
Classification: Fedora
Component: xen
Version: rawhide
Hardware: x86_64 Linux
Target Milestone: ---
Assignee: Daniel Berrange
QA Contact: Martin Jenner
Whiteboard: bzcl34nup
Depends On:
Reported: 2007-05-24 21:55 UTC by Srihari Vijayaraghavan
Modified: 2008-05-07 01:49 UTC
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2008-05-07 01:49:06 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
Fix handling of file descriptors on VNC client disconnect (2.56 KB, patch)
2007-05-25 13:01 UTC, Daniel Berrange

Description Srihari Vijayaraghavan 2007-05-24 21:55:47 UTC
Description of problem:
Unable to open the VMs although the VMs are running fine. Virtual Machine
Manager stops responding; terminating it is then the only option.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Start Virtual Machine Manager (VMM)
2. Start a few VMs (QEMU with HVM enabled); observe that the consoles of the
individual VMs work fine
3. Exit VMM, then start it again. Try opening one of the VMs started in the
step above; observe that VMM no longer responds, and terminating it is the only
option.

Actual results:
VMM just hangs without responding.

Expected results:
VMM opens the console of each VM successfully.

Additional info:
On an AMD 5600 with HVM support.

Comment 1 Srihari Vijayaraghavan 2007-05-25 08:00:11 UTC
When VMM is unable to open the console of the VM in question, then, and only
then, that VM's qemu process starts consuming more CPU than usual (it often
jumps from, say, 1% utilisation to 10-20%, but not much more).

After that happens, VMM never recovers, forcing me to terminate it forcefully.
Weird.

Whether VMM is working or not, the individual VMs themselves are quite unaware
of VMM's problem and continue to function normally (tested over, say, an SSH
session to the Linux/UNIX-like VMs).

Of course, it'd be nice to have a reachable, working console for each VM, no
matter how many times VMM is restarted.

Comment 2 Daniel Berrange 2007-05-25 12:59:37 UTC
Looking at the Xen codebase, it appears to be missing a data-corruption bugfix
in the VNC server which will (randomly) hit after a VNC client disconnects. I'm
almost certain you're hitting this, since the behaviour you describe, with high
CPU load and hangs upon subsequent connects, matches the symptoms of the bug.

Comment 3 Daniel Berrange 2007-05-25 13:01:34 UTC
Created attachment 155448 [details]
Fix handling of file descriptors on VNC client disconnect

This is the patch applied to upstream QEMU. Hopefully it will work with QEMU
0.8.0 without too much further work.

Comment 4 Srihari Vijayaraghavan 2007-05-25 13:36:37 UTC
Sounds good. May I request the RPM versions of the packages involved? If you
can provide them :-), I'm quite keen to test the fix.

(A Fedora 7 test 4 system with up-to-date updates has qemu-0.9.0-2.fc7. Would
the above fix also apply there?)

Thanks for the quick response and analysis of the problem. I really appreciate it.

Comment 5 Daniel Berrange 2007-05-25 13:42:41 UTC
The 'qemu' RPM is not actually used by Xen at all: Xen has forked the QEMU
code and maintains its own private copy. I'll push this fix out in an updated
RPM ASAP.

Comment 6 Srihari Vijayaraghavan 2007-05-25 23:44:33 UTC
Sorry, due to my own ignorance I'm unable to tell whether the updates you're
referring to will really address my problem. Apologies if this is an FAQ: I
have only KVM-based qemu VMs on this system (i.e. only fully virtualised VMs,
no Xen paravirtualised ones); in my case, do you think the updates (xen? qemu?)
you're talking about will help?

$ rpm -qa|egrep -i 'xen|qemu|kvm|kernel|vnc|virt'


Comment 7 Srihari Vijayaraghavan 2007-05-29 11:31:00 UTC
In VMM, I'm able to reproduce the problem only against the guest with ID 1, I
think. Or at least it's quite easy to reproduce against guest ID 1 and
extremely hard against the others.

Comment 8 Red Hat Bugzilla 2007-07-25 00:07:56 UTC
change QA contact

Comment 9 Bug Zapper 2008-04-04 00:54:35 UTC
Based on the date this bug was created, it appears to have been reported
against rawhide during the development of a Fedora release that is no
longer maintained. In order to refocus our efforts as a project we are
flagging all of the open bugs for releases which are no longer
maintained. If this bug remains in NEEDINFO thirty (30) days from now,
we will automatically close it.

If you can reproduce this bug in a maintained Fedora version (7, 8, or
rawhide), please change this bug to the respective version and change
the status to ASSIGNED. (If you're unable to change the bug's version
or status, add a comment to the bug and someone will change it for you.)

Thanks for your help, and we apologize again that we haven't handled
these issues to this point.

The process we're following is outlined here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.

Comment 10 Bug Zapper 2008-05-07 01:49:02 UTC
This bug has been in NEEDINFO for more than 30 days since feedback was
first requested. As a result we are closing it.

If you can reproduce this bug in the future against a maintained Fedora
version please feel free to reopen it against that version.

The process we're following is outlined here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
