When using a glusterfs volume as the backend for KVM virtual machines, mounted with the FUSE client, migration (with virsh migrate) leads to the virtual machine failing to start on the target hypervisor. Here is the hypervisor's log for that VM:

  2011-09-02 15:37:57.341: starting up
  LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.1.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name glusterclient1 -uuid 0349c19f-c938-a0b6-a58f-5a66ba366832 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/glusterclient1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -drive file=/vms/ha/glusterclient1.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback,aio=native -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:27:86:ff,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 0.0.0.0:0 -vga cirrus -incoming tcp:0.0.0.0:49167 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
  char device redirected to /dev/pts/2
  Using CPU model "cpu64-rhel6"
  qemu: could not open disk image /vms/ha/glusterclient1.img: Permission denied
  qemu: re-open of /vms/ha/glusterclient1.img failed wth error -13
  reopening of drives failed
  2011-09-02 15:38:09.568: shutting down

Nothing shows up in the gluster logs. If I use an NFS mount instead, it works (but has performance issues). This happens with both qcow2 and raw image formats, with cache=writeback or cache=writethrough, and with the performance.write-behind cache enabled or disabled on the volume. I also tried this workaround, with no success: https://github.com/avati/liboindirect

I am using KVM on Scientific Linux 6.1 (equivalent to RHEL 6.1).

Steps to reproduce:
- mount -t glusterfs localhost:/volume /vms
- virsh start domain
- virsh migrate domain qemu+ssh://hypervisor2/system
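For anyone hitting the same "Permission denied", the following commands gather the ownership and permission details involved (a sketch: the image path comes from the log above, and "qemu" is assumed to be the user libvirt runs guests as on RHEL 6; adjust both to your setup):

  # On the target hypervisor: which user/group libvirt runs qemu as
  grep -E '^\s*#?\s*(user|group)\s*=' /etc/libvirt/qemu.conf   # commented lines mean the compiled-in default
  id qemu
  # Ownership and mode of the image as seen through the gluster mount
  ls -ln /vms/ha/glusterclient1.img
  stat -c '%U:%G %a' /vms/ha/glusterclient1.img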
What is the version you are using? Is it the latest git?
I've tested with versions 3.3beta2 and 3.2 (packaged as RPMs), not the latest git.
(In reply to comment #2)
> I've tested with versions 3.3beta2 and 3.2 (packaged as RPMs), not the latest git.

Julien,
Could you let us know the permissions of the VM files, and whether the user accessing them holds the relevant group as its primary group or as a secondary (supplementary) group?

Pranith
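A minimal way to gather that information (assuming the image is accessed as the "qemu" user; substitute whichever user your libvirt setup actually uses):

  id -gn qemu    # primary group of the qemu user
  id -Gn qemu    # all groups; anything beyond the first is a secondary (supplementary) group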
(In reply to comment #3)

Julien,
Since the same procedure works with NFS, this bug is most likely a duplicate of bug 3587. You can check whether it is fixed with the following patch applied: http://review.gluster.com/464

Pranith
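If the root cause is indeed the supplementary-group handling from bug 3587, it can be checked without KVM at all. A sketch, with illustrative user/group/file names (vmops, testuser, /vms/groups-test are made up for this test):

  # Run as root on a hypervisor; /vms is the glusterfs FUSE mount
  groupadd vmops
  useradd -m -G vmops testuser     # vmops is only a supplementary group for testuser
  echo data > /vms/groups-test
  chgrp vmops /vms/groups-test
  chmod 640 /vms/groups-test       # readable by owner and group only
  su - testuser -c 'cat /vms/groups-test'
  # "Permission denied" here, while the same test passes on NFS or a local
  # filesystem, indicates supplementary groups are not being forwarded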
Julien / Pranith,
Did the patch actually fix the issue, or does it still persist?

VS
With the patch that handles auxiliary groups properly in the glusterfs protocol, this issue is now resolved. Please use the 3.3.0+ releases.
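To verify the fix after upgrading, a quick sanity check (assuming both hypervisors run the upgraded packages and the volume has been remounted):

  glusterfs --version    # should report 3.3.0 or newer on both hypervisors
  mount -t glusterfs localhost:/volume /vms
  virsh start domain
  virsh migrate domain qemu+ssh://hypervisor2/system
  # The guest should now be running on hypervisor2 with no qemu permission errors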