Description of problem:

Setup: four Xen VMs share the same disk image, which should be mounted read-only in each guest. When I start the VMs sequentially (one after another) using python-libvirt, everything works. If I start the VMs in parallel (using multiple Python threads), I hit a race condition and get the following error from Xen:

Device 51713 (vbd) could not be connected. File /home/xge/imagepool/2/a64/disk.img is loopback-mounted through /dev/loop0 /dev/loop3, which is mounted read-write in a guest domain, and so cannot be mounted read-only now

It seems libvirt mounts the disk image read-write for a short period of time and then switches to read-only. If I add a small delay between thread creations, it also works. Nevertheless, this is not a good solution :)

Version-Release number of selected component (if applicable):

libvirt 0.8.3-5+squeeze2 (Debian Squeeze package), Xen 4.0.1, kernel 2.6.32-5-xen-amd64

How reproducible:

Use libvirt to start multiple VMs that share the same read-only disk image in parallel.

Steps to Reproduce:
1. Create a VM disk image.
2. Write python-libvirt code that starts multiple VMs in parallel (see the sketch below).
3. Set the <readonly/> flag in the <disk> section of each domain's XML.

Actual results:
Xen reports the error above. Some VMs are started, some are not.

Expected results:
All VMs are running and share the same disk image, accessed read-only.

Additional info:
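For reference, a minimal python-libvirt sketch of the reproducer described in the steps above. This is not the reporter's actual script: the connection URI, VM names, memory size, and guest kernel path are placeholder assumptions; only the image path and the <readonly/> flag come from the report.

# Minimal reproducer sketch: start several Xen VMs in parallel, all
# sharing one read-only disk image. Assumes a Xen host reachable via
# the "xen:///" URI; names/memory/kernel path below are hypothetical.
import threading
import libvirt

IMAGE = "/home/xge/imagepool/2/a64/disk.img"  # path from the error above

DOMAIN_XML = """
<domain type='xen'>
  <name>{name}</name>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <os>
    <type>linux</type>
    <kernel>/boot/vmlinuz-2.6.32-5-xen-amd64</kernel>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{image}'/>
      <target dev='xvda' bus='xen'/>
      <readonly/>  <!-- step 3: mark the shared disk read-only -->
    </disk>
  </devices>
</domain>
"""

def start_vm(conn, name):
    # createXML() defines and starts a transient domain in one call.
    conn.createXML(DOMAIN_XML.format(name=name, image=IMAGE), 0)

def main():
    conn = libvirt.open("xen:///")  # libvirt connections may be shared across threads
    threads = [threading.Thread(target=start_vm, args=(conn, "vm%d" % i))
               for i in range(4)]
    for t in threads:
        t.start()
        # time.sleep(1)  # a small delay here masks the race, per the report (needs "import time")
    for t in threads:
        t.join()
    conn.close()

if __name__ == "__main__":
    main()

Run without the sleep to hit the race; with it, all four VMs come up, which matches the reporter's observation that sequential starts work.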
Sorry this never received a timely response. As far as I know, libvirt doesn't mount disks read-write itself; this may have been a Xen bug. Given the age of the report, I'm closing it as DEFERRED, but if you can still reproduce with modern libvirt and Xen, please reopen and I will try to help triage it further.