Description of problem:

If you connect to gluster from a non-root client, it fails with:

$ qemu-img create gluster://server-gluster2/vmdisks/test1 1G
Formatting 'gluster://server-gluster2/vmdisks/test1', fmt=raw size=1073741824
qemu-img: Gluster connection failed for server=server-gluster2 port=0 volume=vmdisks image=test1 transport=tcp
qemu-img: gluster://server-gluster2/vmdisks/test1: error while creating raw: Transport endpoint is not connected

The real error is completely hidden, but if you happen to find the right file on the right server, you can see it's:

[2014-01-23 18:44:33.788604] E [rpcsvc.c:521:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request

To avoid this you have to edit /etc/glusterfs/glusterd.vol and add (on all bricks, AFAICT):

option rpc-auth-allow-insecure on

and restart glusterd.

Seriously, no one uses port numbers to guarantee security. It's not 1980. This setting should default to on.

Version-Release number of selected component (if applicable): 3.4.2

How reproducible: 100%

Steps to reproduce: Try to use glusterfs.
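The workaround, spelled out: a sketch of the management volume definition in /etc/glusterfs/glusterd.vol with the option added (the surrounding options vary by version, so treat everything except the rpc-auth-allow-insecure line as illustrative):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option rpc-auth-allow-insecure on
end-volume
```

After editing, restart glusterd on each node.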
You can actually change the volume setting through the CLI by setting server.allow-insecure.

This change in the default behavior should be conditional on SSL being enabled.

It's also definitely not reproducible with your steps. Most users "Try to use glusterfs" without encountering your issue. Running mount as a non-root user results in "mount: only root can do that", precluding your issue without additional steps.

I would consider the more apt bug to be that the client doesn't report the error if it cannot acquire a "secure" port, and that deficiency results in the connection being refused.
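For concreteness, the CLI route is something like this (a sketch; "vmdisks" is the volume name from the original report):

```
gluster volume set vmdisks server.allow-insecure on
```

Note that this per-volume option governs the bricks (glusterfsd); the management daemon (glusterd) has a separate check controlled by rpc-auth-allow-insecure in /etc/glusterfs/glusterd.vol.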
(In reply to Joe Julian from comment #1)
> It's also definitely not reproducible with your steps. Most users "Try to
> use glusterfs" without encountering your issue. Running mount as a non-root
> user results in "mount: only root can do that" precluding your issue without
> additional steps.

There are now lots of ways to access gluster without using mount, or root:

- qemu, qemu-img
- libvirt session storage
- libguestfs (multiple tools)

and it breaks in the way I described on every one of those.
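Background on why all of these clients fail: a non-root process cannot bind a source port below 1024, so every one of them connects from a "non-privileged" port. A minimal shell sketch of the classification the server applies (illustrative only, not glusterd's actual code; 49152 is just an example client port):

```shell
# Classify a source port per the traditional privileged-port convention:
# ports 1-1023 require root to bind; everything else is "non-privileged".
port=49152
if [ "$port" -lt 1024 ]; then
  echo "privileged"
else
  echo "non-privileged"
fi
```

Any non-root client lands in the second branch, which is exactly the request glusterd rejects unless rpc-auth-allow-insecure is on.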
Joe, I thought that "gluster set vol your-volume allow-insecure on" only affected glusterfsd behavior, not glusterd. To make glusterd function, you still need to edit /etc/glusterfs/glusterd.vol by hand.

This is pretty outrageous for a supposedly scalable product: requiring config-file editing on each node. Security is a concern, but that's not the user's problem; it's the developer's job. Ceph seems to handle this up front with PKI; couldn't Gluster do something similar, authenticating peers with the OpenSSL library when they first communicate?

This will impact any large-scale deployment of Gluster that relies on libgfapi being used by an application.
Aren't SSL sockets upstream as of Gluster 3.6? Couldn't they be used for talking to glusterd and reading initial volfile? That would get rid of the biggest problem, having to edit /etc/glusterfs/glusterd.vol by hand, right?
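For reference, the 3.6+ TLS support is enabled roughly like this (a sketch from memory, so treat the option names and paths as assumptions to verify against your version's docs; "myvol" is a hypothetical volume name):

```
# Management-path (glusterd) encryption: the presence of this file enables it
touch /var/lib/glusterd/secure-access

# I/O-path encryption, per volume
gluster volume set myvol client.ssl on
gluster volume set myvol server.ssl on
```

Certificates are conventionally looked for under /etc/ssl (glusterfs.pem, glusterfs.key, glusterfs.ca). Whether enabling this also relaxes the privileged-port check is exactly the question raised here.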
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify if newer versions are affected with the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.
I see that defaults for glusterfs-3.7 are "rpc-auth-allow-insecure" in /etc/glusterfs/glusterd.vol and "server.allow-insecure" is default for gluster volume parameters! What a pleasant surprise. So you can close this as fixed in RHS 3.1.
Closing upstream based on comment 7.
(In reply to Ben England from comment #7)
> I see that defaults for glusterfs-3.7 are "rpc-auth-allow-insecure" in
> /etc/glusterfs/glusterd.vol and "server.allow-insecure" is default for
> gluster volume parameters! What a pleasant surprise. So you can close this
> as fixed in RHS 3.1.

I don't think this is right. I just installed the latest glusterfs 3.7.2 and I don't see either of them enabled.

[root@f21-docker yum.repos.d]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.2-3.fc21.x86_64
glusterfs-api-3.7.2-3.fc21.x86_64
glusterfs-fuse-3.7.2-3.fc21.x86_64
glusterfs-server-3.7.2-3.fc21.x86_64
glusterfs-libs-3.7.2-3.fc21.x86_64
glusterfs-3.7.2-3.fc21.x86_64
glusterfs-cli-3.7.2-3.fc21.x86_64

[root@f21-docker yum.repos.d]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 30
#   option base-port 49152
end-volume

[root@f21-docker yum.repos.d]# glusterfs --version
glusterfs 3.7.2 built on Jun 23 2015 12:05:44
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.
[root@f21-docker yum.repos.d]# gluster vol create vol1 f21-docker:/brick1 force
volume create: vol1: success: please start the volume to access data
[root@f21-docker yum.repos.d]# gluster v start vol1
volume start: vol1: success
[root@f21-docker yum.repos.d]# gluster v info

Volume Name: vol1
Type: Distribute
Volume ID: 19d7bbdd-bbff-42ba-ac75-b197b508af00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: f21-docker:/brick1
Options Reconfigured:
performance.readdir-ahead: on
FWIW, I also did a "group virt" setting; that too didn't bring in the allow-insecure option (looking at the virt file, it's not part of it; I guess the user/admin needs to set it manually).

[root@f21-docker yum.repos.d]# gluster v set vol1 group virt
volume set: success
[root@f21-docker yum.repos.d]# gluster v info

Volume Name: vol1
Type: Distribute
Volume ID: 19d7bbdd-bbff-42ba-ac75-b197b508af00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: f21-docker:/brick1
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release please reopen this and change the version or open a new bug.