$ qemu-system-x86_64 -nographic -cdrom gluster://192.168.124.100/gv0/boot-with-serial.iso
[2014-12-07 00:56:30.280951] E [rpc-clnt.c:369:saved_frames_unwind] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0x48) [0x7f59dae77918] (-->/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb7) [0x7f59dae75be7] (-->/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f59dae75b0e]))) 0-gfapi: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2014-12-07 00:56:30.280302 (xid=0x1)
[2014-12-07 00:56:30.281051] E [glfs-mgmt.c:586:mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:gv0)
[2014-12-07 00:56:30.281099] E [glfs-mgmt.c:680:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: 192.168.124.100 (No data available)
qemu-system-x86_64: -cdrom gluster://192.168.124.100/gv0/boot-with-serial.iso: could not open disk image gluster://192.168.124.100/gv0/boot-with-serial.iso: Gluster connection failed for server=192.168.124.100 port=0 volume=gv0 image=boot-with-serial.iso transport=tcp: Transport endpoint is not connected
But if I run with sudo, it works:
$ sudo qemu-system-x86_64 -nographic -cdrom gluster://192.168.124.100/gv0/boot-with-serial.iso
Linux version 2.6.18-92.el5 (firstname.lastname@example.org) (gcc version 4.1.2 20071124 (Red Hat 4.1.2-41)) #1 SMP Tue Apr 29 13:16:15 EDT 2008
Command line: initrd=initrd.img console=tty0 console=ttyS0,115200n8 BOOT_IMAGE=vmlinuz
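For what it's worth, the same unprivileged failure is reproducible outside qemu with a small libgfapi program. This is just a sketch, assuming the glusterfs-api development headers are installed; the volume and host are the ones from the commands above:

/* repro.c: minimal libgfapi client; build with: gcc repro.c -o repro -lgfapi */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("gv0");                      /* volume name */
    if (!fs) {
        perror("glfs_new");
        return 1;
    }
    /* same server/transport qemu uses for gluster:// URIs */
    glfs_set_volfile_server(fs, "tcp", "192.168.124.100", 24007);
    if (glfs_init(fs) != 0) {
        perror("glfs_init");   /* fails here when run unprivileged */
        glfs_fini(fs);
        return 1;
    }
    printf("connected to volume gv0\n");
    glfs_fini(fs);
    return 0;
}

It fails in glfs_init() as a normal user and succeeds under sudo, matching the qemu behavior.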
This also affects VMs launched with libvirt, since libvirt runs qemu with reduced privileges; an example of the kind of disk definition involved is below. I don't know whether this is a regression, as this is the first time I've run gluster + qemu together.
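For reference, the affected guests use a libvirt network disk along these lines (illustrative only; the volume, image, and host are taken from the qemu command above):

<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='gv0/boot-with-serial.iso'>
    <host name='192.168.124.100' port='24007'/>
  </source>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>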
Host and server are running:
$ rpm -q qemu-system-x86 libvirt-daemon glusterfs
I'm tempted to close this as NOTABUG, assuming you have missed this step:
Please confirm. Thanks!
(In reply to Niels de Vos from comment #1)
> I'm tempted to close this as NOTABUG, assuming you have missed this step:
Oh, and also the rpc-auth-allow-insecure option mentioned in the paragraph just before that.
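Concretely, the server-side change looks like this (a sketch, assuming the gv0 volume from this report; it is needed because an unprivileged qemu cannot bind a source port below 1024, which Gluster requires by default):

$ gluster volume set gv0 server.allow-insecure on

and in /etc/glusterfs/glusterd.vol on each server:

option rpc-auth-allow-insecure on

followed by a restart of glusterd (and a stop/start of the volume) for the options to take effect.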
Indeed, that fixed it. Sorry, I tried googling but somehow missed bug 1057292.