Bug 763171 (GLUSTER-1439) - cluster/afr issue with "option volume-filename.default" on server volfile
Summary: cluster/afr issue with "option volume-filename.default" on server volfile
Keywords:
Status: CLOSED WONTFIX
Alias: GLUSTER-1439
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Anand Avati
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-08-25 18:41 UTC by Bernard Li
Modified: 2015-12-01 16:45 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Bernard Li 2010-08-25 18:41:19 UTC
This was originally posted to the gluster-users mailing-list:

http://gluster.org/pipermail/gluster-users/2010-June/004879.html

I have a simple cluster/afr setup, but I am having trouble mounting the
volume by retrieving the default volfile from the server via the
"volume-filename.default" option.

Here are the volfiles:

[server]

volume posix
    type storage/posix
    option directory /export/gluster
end-volume

volume locks
    type features/locks
    subvolumes posix
end-volume

volume brick
    type performance/io-threads
    option thread-count 8
    subvolumes locks
end-volume

volume server
    type protocol/server
    option transport-type tcp
    option auth.addr.brick.allow *
    option listen-port 6996
    option volume-filename.default /etc/glusterfs/glusterfs.vol
    subvolumes brick
end-volume

[client]

volume gluster1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.10
    option remote-port 6996
    option remote-subvolume brick
end-volume

volume gluster2
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.11
    option remote-port 6996
    option remote-subvolume brick
end-volume

volume gluster
    type cluster/afr
    subvolumes gluster1 gluster2
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes gluster
end-volume

volume io-cache
    type performance/io-cache
    option cache-size 1GB
    subvolumes writebehind
end-volume

When I mount via `glusterfs -s 192.168.1.10 /mnt/glusterfs` (on
192.168.1.10), I get the following in the logs:

[2010-06-22 11:37:55] N [glusterfsd.c:1408:main] glusterfs: Successfully started
[2010-06-22 11:37:55] N [client-protocol.c:6288:client_setvolume_cbk]
gluster1: Connected to 192.168.1.10:6996, attached to remote volume
'brick'.
[2010-06-22 11:37:55] N [afr.c:2636:notify] gluster: Subvolume
'gluster1' came back up; going online.
[2010-06-22 11:37:55] N [client-protocol.c:6288:client_setvolume_cbk]
gluster1: Connected to 192.168.1.10:6996, attached to remote volume
'brick'.
[2010-06-22 11:37:55] N [afr.c:2636:notify] gluster: Subvolume
'gluster1' came back up; going online.
[2010-06-22 11:37:55] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse:
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10

Note that the log only mentions "gluster1" and says nothing about "gluster2".

If I touch a file in /mnt/glusterfs, on the backend the file only shows
up on gluster1 and not on gluster2.
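
A minimal way to see that on the backend, assuming the export directory
/export/gluster from the volfiles above (the file name "testfile" is just
an example):

# on the client (mounted with `glusterfs -s 192.168.1.10 /mnt/glusterfs`)
touch /mnt/glusterfs/testfile

# on 192.168.1.10 (gluster1) the file appears in the brick directory:
ls /export/gluster/testfile

# on 192.168.1.11 (gluster2) the same path is missing:
ls /export/gluster/testfile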

When I mount via `glusterfs -s 192.168.1.11 /mnt/glusterfs` (on
192.168.1.10), I get the following in the logs:

[2010-06-22 11:46:24] N [glusterfsd.c:1408:main] glusterfs: Successfully started
[2010-06-22 11:46:24] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse:
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10
[2010-06-22 11:46:30] W [fuse-bridge.c:725:fuse_attr_cbk]
glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not
connected)

When I mount directly via the volfile, as in `glusterfs -f
/etc/glusterfs/glusterfs.vol /mnt/glusterfs` (on 192.168.1.10), then
everything works as expected.  Here's the log:

[2010-06-22 11:39:47] N [glusterfsd.c:1408:main] glusterfs: Successfully started
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk]
gluster1: Connected to 192.168.1.10:6996, attached to remote volume
'brick'.
[2010-06-22 11:39:47] N [afr.c:2636:notify] gluster: Subvolume
'gluster1' came back up; going online.
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk]
gluster1: Connected to 192.168.1.10:6996, attached to remote volume
'brick'.
[2010-06-22 11:39:47] N [afr.c:2636:notify] gluster: Subvolume
'gluster1' came back up; going online.
[2010-06-22 11:39:47] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse:
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.10
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk]
gluster2: Connected to 192.168.1.11:6996, attached to remote volume
'brick'.
[2010-06-22 11:39:47] N [client-protocol.c:6288:client_setvolume_cbk]
gluster2: Connected to 192.168.1.11:6996, attached to remote volume
'brick'.
[2010-06-22 11:41:03] N [fuse-bridge.c:3143:fuse_thread_proc]
glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse

Is this a known issue?  Or am I doing something unsupported?

Comment 1 Amar Tumballi 2011-01-21 08:23:57 UTC
Bernard,

I would like to close this issue, as we have moved to a completely different way of managing volumes (config files). This is no longer an issue in the 3.1.x releases.

Regards,
Amar
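
For comparison, the 3.1-style management referred to above replaces
hand-written volfiles with volumes created through the gluster CLI; a rough
sketch of the equivalent two-server replicated volume, using the hosts and
export directory from this report (the volume name "gv0" is illustrative,
and exact syntax may vary between 3.1.x releases):

# run on 192.168.1.10: add the second server to the trusted pool
gluster peer probe 192.168.1.11

# create and start a two-way replicated volume; glusterd generates the volfiles
gluster volume create gv0 replica 2 transport tcp \
    192.168.1.10:/export/gluster 192.168.1.11:/export/gluster
gluster volume start gv0

# on the client, mount by volume name instead of pointing at a volfile
mount -t glusterfs 192.168.1.10:/gv0 /mnt/glusterfs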

