Red Hat Bugzilla – Bug 445476
Unable to create a PV guest with > 3 disks
Last modified: 2009-12-14 15:59:40 EST
Description of problem:
I have a guest with 4 disks configured
disk = [ "file:/var/lib/xen/images/rhel5pv.img,xvda,w",
and it often fails to start up, giving a hotplug error:
# virsh start rhel5pv
libvir: Xen Daemon error : POST operation failed: (xend.err 'Device 51760 (tap)
could not be connected. xenstore-read backend/tap/3/51904/params failed.')
error: Failed to start domain rhel5pv
If I drop it down to only 3 disks, it'll start fairly reliably. If I increase it
to have 8 disks, it'll never start.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Configure a guest with 10 disks
2. Start it
Fails to boot
Removing the following code from /etc/xen/scripts/blktap
if [ "$mode" != '!' ]
then
    result=$(check_blktap_sharing "$file" "$mode")
    [ "$result" = 'ok' ] || ebusy "$file already in use by other domain"
fi
fixes the problem allowing it to start with as many as 16 disks.
This code was added in RHEL-5.2 in the patch xen-blktap-sharing.patch for bug 223259
IMHO, the patch needs to be reverted - it causes a serious regression from
RHEL-5.1, for minimal gain.
Fix built for 5.3 in
$ brew latest-pkg dist-5E-qu-candidate xen
Build Tag Built by
---------------------------------------- -------------------- ----------------
xen-3.0.3-65.el5 dist-5E-qu-candidate berrange
* Thu May 8 2008 Daniel P. Berrange <email@example.com> - 3.0.3-65.el5
- Remove blktap sharing patch which prevents guests with large
numbers of disks booting (rhbz #445476)
Please clone this bug for 5.2.z
We tested starting a domain with more than 4 disks with the blktap patch applied.
We saw no problem, so we do not think the patch is related to your test result.
We would like to check for a race condition with another domain's disks and test further.
Thank you. Fujitsu) Nishi
The problem is a race condition that depends on many factors. On some machines >
3 disks causes a problem, on other machines it only impacts > 8 disks. When it
hits, the errors are clearly coming from the blktap sharing patch. The problem is
that it is trying to read info about other disks out of xenstore, while XenD is
still writing the entries into xenstore.
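The race can be reproduced in miniature with an ordinary directory standing in for xenstore. This is only an illustrative sketch: the mock store path, the read_params helper, and the loop shape are assumptions, not the real check_blktap_sharing code, but they show how enumerating backend entries while the writer is mid-way produces exactly a "xenstore-read ... failed" error.

```shell
store=$(mktemp -d)            # stands in for the backend/tap subtree of xenstore

# "XenD" side: the device node is created first, its 'params' key is
# written only afterwards -- this window is where the hotplug script races.
mkdir -p "$store/3/51904"

# Hypothetical stand-in for the sharing check: read 'params' for every entry.
read_params() {
    for dev in "$store"/3/*; do
        cat "$dev/params" 2>/dev/null || echo "xenstore-read $dev/params failed"
    done
}

before=$(read_params)         # runs while the entry is half-written: fails
echo "file:/var/lib/xen/images/rhel5pv.img" > "$store/3/51904/params"
after=$(read_params)          # once the writer finishes, the same read succeeds

echo "$before"
echo "$after"
rm -rf "$store"
```

With more disks there are more half-written entries at any moment, which is consistent with the failure becoming more likely as the disk count grows.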
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.