Bug 765236 (GLUSTER-3504)

Summary: KVM migration fails : permission denied
Product: [Community] GlusterFS
Reporter: Julien Garet <julien.garet>
Component: fuse
Assignee: Pranith Kumar K <pkarampu>
Status: CLOSED WORKSFORME
QA Contact:
Severity: medium
Docs Contact:
Priority: medium
Version: pre-release
CC: amarts, gluster-bugs, vijay, vinaraya
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.3.1, glusterfs-3.4.0qa4
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-12 06:51:04 UTC
Type: ---
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Julien Garet 2011-09-02 15:09:44 UTC
When using a glusterfs volume as the backend for KVM virtual machines, mounted with the FUSE client, migration (with virsh migrate) leaves the virtual machine unable to start on the target hypervisor.

Here is the hypervisor's log for that VM:
2011-09-02 15:37:57.341: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.1.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name glusterclient1 -uuid 0349c19f-c938-a0b6-a58f-5a66ba366832 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/glusterclient1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -drive file=/vms/ha/glusterclient1.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback,aio=native -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:27:86:ff,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 0.0.0.0:0 -vga cirrus -incoming tcp:0.0.0.0:49167 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/2
Using CPU model "cpu64-rhel6"
qemu: could not open disk image /vms/ha/glusterclient1.img: Permission denied
qemu: re-open of /vms/ha/glusterclient1.img failed with error -13
reopening of drives failed
2011-09-02 15:38:09.568: shutting down

Nothing shows up in the gluster logs. If I use an NFS mount, it works (but has performance issues).

This happens with both qcow2 and raw image formats, with cache=writeback or cache=writethrough, and with the performance.write-behind cache enabled or disabled on the volume. I also tried this workaround with no success: https://github.com/avati/liboindirect

I am using KVM on Scientific Linux 6.1 (equivalent to RHEL 6.1).

Steps to reproduce:
- mount -t glusterfs localhost:/volume /vms
- virsh start domain
- virsh migrate domain qemu+ssh://hypervisor2/system
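
A quick way to confirm the permission failure independently of libvirt is to try opening the image as the user qemu-kvm runs as on the target hypervisor. This is a hypothetical diagnostic, not from the original report; the "qemu" user name is the RHEL 6 default and may differ on your setup:

# On the target hypervisor, with the volume FUSE-mounted at /vms:
sudo -u qemu qemu-img info /vms/ha/glusterclient1.img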

Comment 1 Pranith Kumar K 2011-09-19 04:51:54 UTC
What version are you using? Is it the latest git?

Comment 2 Julien Garet 2011-09-19 04:53:30 UTC
I've tested with versions 3.3beta2 and 3.2 (packaged as RPMs), not the latest git.

Comment 3 Pranith Kumar K 2011-09-19 13:26:13 UTC
(In reply to comment #2)
> I've tested with version 3.3beta2 and 3.2 (packaged as RPMs), not the latest
> git.

Julien,
   Could you let us know the permissions of the VM files, and whether the user belongs to the relevant group as a primary or a secondary group?

Pranith
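
For reference, the details asked for above can be gathered with standard tools. A minimal sketch, assuming qemu-kvm runs as the "qemu" user:

# Owner, group, and mode of the VM image:
stat -c '%U %G %a' /vms/ha/glusterclient1.img
# Primary and supplementary (secondary) groups of the qemu user:
id qemu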

Comment 4 Pranith Kumar K 2011-09-19 13:40:44 UTC
(In reply to comment #3)
> (In reply to comment #2)
> > I've tested with version 3.3beta2 and 3.2 (packaged as RPMs), not the latest
> > git.
> 
> Julien,
>    Could you let us know the permissions of the VM files, whether the user
> belongs to primary group or secondary group etc.
> 
> Pranith

Julien,
     Since the same procedure works with NFS, this bug is most likely a duplicate of 3587. You can check whether it works with the following patch: http://review.gluster.com/464

Pranith
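
Bug 3587 concerns auxiliary (supplementary) group IDs not being carried through the glusterfs protocol, so access granted only via a secondary group is denied. A minimal sketch of that scenario, assuming a hypothetical "vmimages" group to which the qemu user belongs only as a supplementary group:

# Image readable only by owner and group (e.g. root:vmimages, mode 660).
# Without the fix, this read fails with EACCES (-13) through the FUSE
# mount even though the same command succeeds over an NFS mount:
sudo -u qemu dd if=/vms/ha/glusterclient1.img of=/dev/null bs=1M count=1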

Comment 5 Vidya Sakar 2012-08-15 09:49:05 UTC
Julien / Pranith,
Did the patch actually fix the issue or does this issue still persist?
VS

Comment 6 Amar Tumballi 2012-12-12 06:51:04 UTC
With the patch to handle auxiliary groups properly in the glusterfs protocol, this issue should now be resolved. Please use 3.3.0+ versions.
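
To verify on a given host, check the installed client version against the Fixed In Version field above and retry the reproduction steps; a sketch:

glusterfs --version    # should report 3.3.1, 3.4.0qa4, or later
mount -t glusterfs localhost:/volume /vms
virsh migrate domain qemu+ssh://hypervisor2/system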