Bug 244160

Summary: Possible to start with in-use tap device
Product: Fedora
Component: xen
Version: 7
Hardware: All
OS: Linux
Status: CLOSED WONTFIX
Severity: medium
Priority: low
Reporter: Saori Fukuta <fukuta.saori>
Assignee: Xen Maintainance List <xen-maint>
CC: katzj, triage, xen-maint
Doc Type: Bug Fix
Last Closed: 2008-06-17 01:35:16 UTC

Description Saori Fukuta 2007-06-14 08:46:50 UTC
Description of problem:
  It is possible to start a guest domain even when it has a tap device that is
  already in use by another guest.

Version-Release number of selected component (if applicable):
  libvirt 0.2.3 (revision 1.575)

  xen-libs-3.1.0-0.rc7.1.fc7
  xen-3.1.0-0.rc7.1.fc7
  xen-devel-3.1.0-0.rc7.1.fc7
  kernel-xen-2.6.20-2925.9.fc7

How reproducible:
  always

Steps to Reproduce:
  1. add the tap device to the inactive guest (an illustrative device file is
     sketched after these steps)
  # virsh attach-device PV_RH5_11 <file>

  2. start the guest
  # virsh start PV_RH5_11
  Domain PV_RH5_11 started
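
  An illustrative sketch of what <file> in step 1 might contain for a
  blktap-backed disk; the image path and target device are made-up
  placeholders (not values from this report), and the exact XML accepted by
  libvirt 0.2.3 / xend 3.1 may differ:

  <disk type='file'>
    <driver name='tap' type='aio'/>
    <source file='/var/lib/xen/images/shared.img'/>
    <target dev='xvdb'/>
  </disk>

  With the same image already attached (via the same XML) to another running
  guest, step 2 still succeeds.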
  
Actual results:
  It is possible to start the guest domain.

Expected results:
  It should not be possible to start the guest domain, because the tap device
  is already in use by another guest.

Additional info:
  By contrast, it is *not* possible to start the guest domain when it has a
  vbd or file device that is in use by another guest.
  
  in the case of vbd:
  # virsh start PV_RH5_11
  libvir: Xen Daemon error : POST operation failed: (xend.err 'Device 51744 
(vbd) could not be connected.\nFile /root/test_f.img is loopback-mounted 
through /dev/loop0,\nwhich is mounted in a guest domain,\nand so cannot be 
mounted now.')
  error: Failed to start domain PV_RH5_11

  in the case of file:
  [root@sydney update]# virsh start PV_RH5_11
  libvir: Xen Daemon error : POST operation failed: (xend.err 'Device xvda 
(51712, vbd) is already connected.')
  error: Failed to start domain PV_RH5_11

Comment 1 Daniel Veillard 2007-06-14 09:00:54 UTC
I'm not sure I understand.
If you expect the error to be caught by libvirt, I think it's the wrong place
to make the check. Only the hypervisor (Xen or whatever) can really tell if 
a device is shareable or not.
And your report seems to indicate that the hypervisor does, in those cases,
actually detect the conflict, and that it is reported back via virsh.

  So I do not understand your bug report, please clarify,

  thanks,

Daniel

Comment 2 Saori Fukuta 2007-06-14 09:28:51 UTC
I expect the check to be made, but I'm not sure where the best place for it is,
because virt-install already checks whether the device is in use or not.
So I would like to know where the check should be done.
If we cannot expect libvirt to make the check, I will file a new bugzilla
as a Xen problem.

Thanks,

Saori

Comment 3 Daniel Veillard 2007-06-14 09:37:21 UTC
As I understand now, it's only for tap devices that the check seems to be missing.
I think it cannot and should not be done at the libvirt level, but in Xen itself.

  so I'm reassigning the bug to the xen package and updating the Summary,

   thanks,

Daniel

Comment 4 Daniel Berrangé 2007-06-14 11:11:06 UTC
This is a bug in the Xen hotplug scripts. 

In the /etc/xen/scripts/block script, it looks in XenStore for all vbd devices
and checks their device numbers to see if there are any clashes. Unfortunately,
blktap device info lives in a different part of the XenStore tree and also has
no physical device associated with it, so this whole checking scheme is broken.
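
For illustration, a minimal sketch of the kind of sharing check this implies,
extended to cover the tap backend directory as well. This is not the actual
/etc/xen/scripts/block code; it assumes the xenstore-list/xenstore-read tools,
the conventional /local/domain/0/backend/{vbd,tap}/<domid>/<devid>/params
layout, and blktap params of the form "tap:aio:/path/to/image":

  #!/bin/sh
  # Sketch only: refuse device $1 if any existing vbd or tap backend uses it.
  target=$(readlink -f "$1")
  for type in vbd tap; do
    base=/local/domain/0/backend/$type
    for domid in $(xenstore-list "$base" 2>/dev/null); do
      for devid in $(xenstore-list "$base/$domid" 2>/dev/null); do
        params=$(xenstore-read "$base/$domid/$devid/params" 2>/dev/null)
        # tap params carry a "tap:aio:" prefix; keep only the image path
        file=$(readlink -f "${params##*:}")
        if [ -n "$file" ] && [ "$file" = "$target" ]; then
          echo "ERROR: $target is already in use by domain $domid ($type)" >&2
          exit 1
        fi
      done
    done
  done

The upstream changeset referenced in comment 5 adds such a sharing check for
blktap in the hotplug scripts.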


Comment 5 Daniel Berrangé 2007-09-14 19:33:33 UTC
Need to backport this upstream changeset

changeset:   15737:bd8647a7b992
user:        kfraser
date:        Fri Aug 17 10:02:52 2007 +0100
files:       tools/examples/blktap tools/examples/block
tools/examples/block-common.sh
description:
Add sharing-check for blktap

Signed-off-by: Takanori Kasai <kasai.takanori.com>
Signed-off-by: Hirofumi Tsujimura <tsujimura.hirof.com>


Comment 6 Bug Zapper 2008-05-14 13:04:27 UTC
This message is a reminder that Fedora 7 is nearing the end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 7. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '7'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 7's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 7 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora please change the 'version' of this bug. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. If possible, it is recommended that you try the newest available Fedora distribution to see if your bug still exists.

Please read the Release Notes for the newest Fedora distribution to make sure it will meet your needs:
http://docs.fedoraproject.org/release-notes/

The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 7 Bug Zapper 2008-06-17 01:35:15 UTC
Fedora 7 changed to end-of-life (EOL) status on June 13, 2008. 
Fedora 7 is no longer maintained, which means that it will not 
receive any further security or bug fix updates. As a result we 
are closing this bug. 

If you can reproduce this bug against a currently maintained version 
of Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.