Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1607767

Summary: New setup: Failed to get the 'volume file' from server
Product: [Community] GlusterFS
Reporter: Julian <julian.dehn>
Component: glusterd
Assignee: bugs <bugs>
Status: CLOSED NOTABUG
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 4.1
CC: bugs
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-07-25 10:14:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Julian 2018-07-24 08:41:16 UTC
Description of problem:
I just installed GlusterFS 4.1 on two CentOS-machines and created a replicated volume.
I want to access the volume directly from each server on which GlusterFS is installed.

Mounting the volume with "mount -t glusterfs localhost:/volume /mnt/volume" fails; the log file shows "0-glusterfs: failed to get the 'volume file' from server".

Ports 24007-24008 (tcp, udp) and 49152-49153 (tcp) are open.
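For reference, the ports listed above can be opened on CentOS 7 with firewalld; a minimal sketch (the use of the default zone and permanent rules are assumptions, not from the report):

```shell
# Open GlusterFS management ports (glusterd) and the brick ports
# shown in `gluster volume status` (49152-49153 here).
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=24007-24008/udp
firewall-cmd --permanent --add-port=49152-49153/tcp
firewall-cmd --reload
```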

Server names were changed to "server" in the information below:

[root@server-i /]# gluster peer status
Number of Peers: 1

Hostname: server-ii
Uuid: def2a899-9fdf-475f-877a-8b7764f30dc4
State: Peer in Cluster (Connected)

[root@server-i /]# gluster volume info

Volume Name: server-test-volume
Type: Replicate
Volume ID: a30dc295-586a-4cbc-9e1f-fb6741f0bb63
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server-i:/gluster/server-test-brick/volume
Brick2: server-ii:/gluster/server-test-brick/volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

[root@server-test-i /]# gluster volume status
Status of volume: server-test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server-test-i:/gluster/server-test-brick/volume
                                            49152     0          Y       1576
Brick server-test-ii:/gluster/server-test-brick/volume
                                            49152     0          Y       21791
Self-heal Daemon on localhost               N/A       N/A        Y       12579
Self-heal Daemon on server-ii               N/A       N/A        Y       37808

Task Status of Volume server-test-volume
------------------------------------------------------------------------------
There are no active volume tasks


Version-Release number of selected component (if applicable):

[root@server-test-i /]# yum list installed | grep gluster
centos-release-gluster41.x86_64    1.0-1.el7.centos            @extras
glusterfs.x86_64                   4.1.1-1.el7                 @centos-gluster41
glusterfs-api.x86_64               4.1.1-1.el7                 @centos-gluster41
glusterfs-cli.x86_64               4.1.1-1.el7                 @centos-gluster41
glusterfs-client-xlators.x86_64    4.1.1-1.el7                 @centos-gluster41
glusterfs-fuse.x86_64              4.1.1-1.el7                 @centos-gluster41
glusterfs-libs.x86_64              4.1.1-1.el7                 @centos-gluster41
glusterfs-server.x86_64            4.1.1-1.el7                 @centos-gluster41
userspace-rcu.x86_64               0.10.0-3.el7                @centos-gluster41

Comment 1 Julian 2018-07-24 08:48:06 UTC
Ignore the differing server names:
server-i = server-test-i
server-ii = server-test-ii

This is my mistake from editing the real names into "fake" names.

Comment 2 Julian 2018-07-25 10:14:38 UTC
My fault - you shouldn't mix up brick names and volume names.
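In other words, the FUSE mount must reference the volume name reported by `gluster volume info` (here `server-test-volume`), not the brick directory name (`volume`). A sketch of the failing versus corrected command, using the placeholder names from this report:

```shell
# Fails: "volume" is only the last path component of the brick
# (/gluster/server-test-brick/volume), so glusterd has no volume
# file to hand out for it.
mount -t glusterfs localhost:/volume /mnt/volume

# Works: use the volume name from `gluster volume info`.
mount -t glusterfs localhost:/server-test-volume /mnt/volume
```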