Bug 1096425 - i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
Summary: i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: nfs
Version: 3.5.0
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Niels de Vos
QA Contact:
URL:
Whiteboard:
Depends On: 1053579 1104997
Blocks: glusterfs-3.5.1
 
Reported: 2014-05-10 02:16 UTC by Niels de Vos
Modified: 2014-06-24 11:05 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.5.1beta2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1053579
Environment:
Last Closed: 2014-06-24 11:05:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Niels de Vos 2014-05-10 02:16:25 UTC
--- Additional comment from Niels de Vos on 2013-12-20 12:32:40 EST ---

(In reply to Niels de Vos from comment #4)
> I guess this is the same as bug 1032359. The number of groups for a reproducer
> is 100 (not exceeding 200) and the nfs-server uses the gid-cache functions.
> 
> I'll try to confirm this and will leave a note with the results.

It seems I was wrong here: a user with 128 additional groups cannot access the contents of an NFS-mount, whereas root with only a few groups can.

Starting the NFS-service with --log-level=DEBUG gives the following information in the nfs.log when the mount is accessed (the NFS-client gets the error immediately):

[2013-12-20 22:24:37.874048] D [nfs3-helpers.c:1629:nfs3_log_common_call] 0-nfs-nfsv3: XID: a8aa1a2, ACCESS: args: FH: exportid 2827ec5f-fa7f-420c-9933-85600438c7e3, gfid 00000000-0000-0000-0000-000000000001
[2013-12-20 22:24:37.874267] W [xdr-rpcclnt.c:79:rpc_request_to_xdr] 0-rpc: failed to encode call msg
[2013-12-20 22:24:37.874294] D [rpc-clnt.c:1188:rpc_clnt_record_build_header] 0-rpc-clnt: Failed to create RPC request
[2013-12-20 22:24:37.874303] E [rpc-clnt.c:1263:rpc_clnt_record_build_record] 0-test-vol-client-0: Failed to build record header
[2013-12-20 22:24:37.874312] W [rpc-clnt.c:1323:rpc_clnt_record] 0-test-vol-client-0: cannot build rpc-record
[2013-12-20 22:24:37.874320] W [rpc-clnt.c:1464:rpc_clnt_submit] 0-test-vol-client-0: cannot build rpc-record
[2013-12-20 22:24:37.874331] W [client-rpc-fops.c:1369:client3_3_access_cbk] 0-test-vol-client-0: remote operation failed: Transport endpoint is not connected
[2013-12-20 22:24:37.874347] W [xdr-rpcclnt.c:79:rpc_request_to_xdr] 0-rpc: failed to encode call msg
[2013-12-20 22:24:37.874355] D [rpc-clnt.c:1188:rpc_clnt_record_build_header] 0-rpc-clnt: Failed to create RPC request
[2013-12-20 22:24:37.874377] E [rpc-clnt.c:1263:rpc_clnt_record_build_record] 0-test-vol-client-1: Failed to build record header
[2013-12-20 22:24:37.874385] W [rpc-clnt.c:1323:rpc_clnt_record] 0-test-vol-client-1: cannot build rpc-record
[2013-12-20 22:24:37.874392] W [rpc-clnt.c:1464:rpc_clnt_submit] 0-test-vol-client-1: cannot build rpc-record
[2013-12-20 22:24:37.874400] W [client-rpc-fops.c:1369:client3_3_access_cbk] 0-test-vol-client-1: remote operation failed: Transport endpoint is not connected
[2013-12-20 22:24:37.874410] W [nfs3.c:1520:nfs3svc_access_cbk] 0-nfs: a8aa1a2: / => -1 (Transport endpoint is not connected)
[2013-12-20 22:24:37.874424] W [nfs3-helpers.c:3391:nfs3_log_common_res] 0-nfs-nfsv3: XID: a8aa1a2, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
[2013-12-20 22:24:37.874490] D [client.c:234:client_submit_request] 0-test-vol-client-1: rpc_clnt_submit failed
[2013-12-20 22:24:37.874506] D [client.c:234:client_submit_request] 0-test-vol-client-0: rpc_clnt_submit failed
[2013-12-20 22:24:50.029357] D [client-handshake.c:185:client_start_ping] 0-test-vol-client-0: returning as transport is already disconnected OR there are no frames (0 || 0)
[2013-12-20 22:24:50.029441] D [client-handshake.c:185:client_start_ping] 0-test-vol-client-1: returning as transport is already disconnected OR there are no frames (0 || 0)

Without unmounting or restarting, the root user can access the NFS-mount without issues.

--- Additional comment from Niels de Vos on 2013-12-23 05:13:20 EST ---

> Thanks for looking into it. The cursory code walk through shows that
> Gluster NFS does not support more than 16 auxgids. I am not sure how
> to repro the issue. Could you please tell us the steps to reproduce
> the issue so that it would be easier to verify the FIX (iff I get
> one :) ).

The RPC/AUTH_UNIX standard that is used to pass the uid/gid and 
aux-groups from the NFS-client to the NFS-server limits the number of 
aux-groups to 16. This limit is defined by the standard, and the 
GlusterFS/NFS server should not increase it.

A solution for this is to have the NFS-server discard the aux-groups 
that it receives in the RPC/AUTH_UNIX header, and resolve the aux-groups 
for the user itself. This is what the 'nfs.server-aux-gids' option enables.
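
For example, assuming a volume named 'test-vol' (the name is only a 
placeholder), the option can be enabled like this:

   # gluster volume set test-vol nfs.server-aux-gids on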

Reproducing the issue is pretty simple:

1. create a user guest
   # useradd guest
2. create a lot of additional groups, and add 'guest' to these groups
   # for I in $(seq 0 127) ; do \
       groupadd guest-$I ; usermod -a -G guest-$I guest ; done
3. copy the entries in /etc/passwd,shadow,group for 'guest' to both 
   NFS-server and client
4. create a volume and export it over NFS
5. mount the NFS-export on the NFS-client (as root) on /mnt
6. as root, check the contents of the mountpoint:
   # ls -l /mnt/
7. as guest, check the contents of the mountpoint:
   # su -c 'ls -l /mnt/' guest
8. enable 'nfs.server-aux-gids' on the volume with 'gluster volume set'
9. as root, check the contents of the mountpoint:
   # ls -l /mnt/
10. as guest, check the contents of the mountpoint:
    # su -c 'ls -l /mnt/' guest

I expect that step 10 will fail, where it should succeed. The 'guest' 
user could have a directory on the mountpoint where only the group 
permissions allow (read and/or write) access. Any of the groups created 
in step 2 should allow 'guest' to read/create/write files. The group 
permissions are only affected when 'nfs.server-aux-gids' is not enabled: 
in that case only the first 16 groups would be usable. (RPC/AUTH_UNIX 
does not require any sorting of the groups, so the permissions might 
seem random.)
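
A quick way to make the group handling visible is to restrict a directory to 
one of the later groups (a sketch; the group 'guest-100' and the path are 
arbitrary choices):

   # mkdir /mnt/gid-test
   # chgrp guest-100 /mnt/gid-test
   # chmod 070 /mnt/gid-test
   # su -c 'touch /mnt/gid-test/somefile' guest

Without 'nfs.server-aux-gids', 'guest-100' is likely not among the 16 groups 
that survive the RPC/AUTH_UNIX header, so the touch may fail with "Permission 
denied". With the option enabled the touch should succeed, but with the bug 
described in this report the guest user gets the I/O error instead.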

--- Additional comment from Anand Avati on 2014-01-15 14:09:01 CET ---

REVIEW: http://review.gluster.org/6715 (gNFS: I/O Error with more than 128 aux-gids) posted (#1) for review on master by Santosh Pradhan (spradhan)

--- Additional comment from Anand Avati on 2014-01-15 14:18:35 CET ---

REVIEW: http://review.gluster.org/6715 (gNFS: I/O Error with more than 128 aux-gids) posted (#2) for review on master by Santosh Pradhan (spradhan)

--- Additional comment from Anand Avati on 2014-01-15 14:21:09 CET ---

REVIEW: http://review.gluster.org/6715 (gNFS: I/O Error with more than 128 aux-gids) posted (#3) for review on master by Santosh Pradhan (spradhan)

--- Additional comment from Anand Avati on 2014-03-06 17:40:02 CET ---

REVIEW: http://review.gluster.org/7202 (rpc: warn and truncate grouplist when more then 93 groups are used) posted (#1) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-03-07 18:32:10 CET ---

REVIEW: http://review.gluster.org/7202 (rpc: warn and truncate grouplist when more then 93 groups are used) posted (#2) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-03-20 18:21:44 CET ---

REVIEW: http://review.gluster.org/7202 (rpc: warn and truncate grouplist if RPC/AUTH can not hold everything) posted (#3) for review on master by Niels de Vos (ndevos)

--- Additional comment from Dylan Gross on 2014-04-03 20:17:00 CEST ---


Marked Comment #1 as Private at the request of the customer mentioned by name (as a result of the cloning and copying of Private comments from the BZ#1044646 Comment #7).

--- Additional comment from Anand Avati on 2014-04-08 19:51:04 CEST ---

COMMIT: http://review.gluster.org/7202 committed in master by Vijay Bellur (vbellur) 
------
commit 8235de189845986a535d676b1fd2c894b9c02e52
Author: Niels de Vos <ndevos>
Date:   Thu Mar 20 18:13:49 2014 +0100

    rpc: warn and truncate grouplist if RPC/AUTH can not hold everything
    
    The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
    header. This header contains the uid, gid and auxiliary groups of the
    user/process that accesses the Gluster Volume.
    
    The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
    be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
    by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
    
    In order to not cause complete failures on the client-side when trying
    to encode an AUTH_GLUSTERFS_V2 that would result in more than 400 bytes,
    we can calculate the expected size of the other elements:
    
        1 | pid
        1 | uid
        1 | gid
        1 | groups_len
       XX | groups_val (GF_MAX_AUX_GROUPS=65535)
        1 | lk_owner_len
       YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
      ----+-------------------------------------------
        5 | total xdr-units
    
      one XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes
      MAX_AUTH_BYTES = 400 is the maximum, this is 100 xdr-units.
      XX + YY can be 95 to fill the 100 xdr-units.
    
      Note that the on-wire protocol has tighter requirements than the
      internal structures. It is possible for xlators to use more groups and
      a bigger lk_owner than can be sent by a GlusterFS-client.
    
    This change prevents overflows when allocating the RPC/AUTH header. Two
    new macros are introduced to calculate the number of groups that fit in
    the RPC/AUTH header, taking the size of the lk_owner into account. In
    case the list of groups exceeds the maximum possible, only the first
    groups are passed over the RPC/GlusterFS protocol to the bricks.
    A warning is added to the logs, so that most system administrators will
    get informed.
    
    Reducing the number of groups is not a new invention. The
    RPC/AUTH header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16
    groups. Most, if not all, NFS-clients will reduce any bigger number of
    groups to 16. (nfs.server-aux-gids can be used to work around the limit
    of 16 groups, but the Gluster NFS-server will be limited to a maximum of
    93 groups, or fewer in case the lk_owner structure contains more items.)
    
    Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
    BUG: 1053579
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/7202
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Harshavardhana <harsha>
    Reviewed-by: Santosh Pradhan <spradhan>
    Reviewed-by: Vijay Bellur <vbellur>
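
As a worked example of the size calculation in the commit message above, the 
93-group figure can be reproduced with a few lines of shell (illustrative 
variable names, not the actual GlusterFS macros; it assumes the common 8-byte 
lk_owner, larger lk_owner values reduce the result):

    MAX_AUTH_BYTES=400        # RFC 5531 limit for the opaque auth body
    BYTES_PER_XDR_UNIT=4
    FIXED_UNITS=5             # pid, uid, gid, groups_len, lk_owner_len
    LK_OWNER_BYTES=8          # assumed lk_owner size
    LK_OWNER_UNITS=$(( (LK_OWNER_BYTES + BYTES_PER_XDR_UNIT - 1) / BYTES_PER_XDR_UNIT ))
    MAX_GROUPS=$(( MAX_AUTH_BYTES / BYTES_PER_XDR_UNIT - FIXED_UNITS - LK_OWNER_UNITS ))
    echo "$MAX_GROUPS"        # 100 - 5 - 2 = 93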

--- Additional comment from Anand Avati on 2014-04-17 18:43:07 CEST ---

REVIEW: http://review.gluster.org/7501 (protocol: implement server.manage-gids for group resolving on the bricks) posted (#1) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-25 14:44:20 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#2) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-26 08:52:47 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#3) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-26 11:43:04 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#4) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-26 12:23:05 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#5) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-26 12:38:37 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#6) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-26 18:59:40 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#7) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-04-27 14:22:30 CEST ---

REVIEW: http://review.gluster.org/7501 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#8) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2014-05-09 21:22:46 CEST ---

COMMIT: http://review.gluster.org/7501 committed in master by Anand Avati (avati) 
------
commit 2fd499d148fc8865c77de8b2c73fe0b7e1737882
Author: Niels de Vos <ndevos>
Date:   Thu Apr 17 18:32:07 2014 +0200

    rpc: implement server.manage-gids for group resolving on the bricks
    
    The new volume option 'server.manage-gids' can be enabled in
    environments where a user belongs to more than the current absolute
    maximum of 93 groups. This option triggers the following behavior:
    
    1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
       libgfapi) will contain only one (1) auxiliary group, instead of
       a full list. This reduces network usage and prevents problems in
       encoding the AUTH_GLUSTERFS structure which should fit in 400 bytes.
    2. The single group in the RPC Calls received by the server is replaced
       by resolving the groups server-side. Permission checks and similar in
       lower xlators are applied against the full list of groups the
       user belongs to, and not the single auxiliary group that the client
       sent.
    
    Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
    BUG: 1053579
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/7501
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Santosh Pradhan <spradhan>
    Reviewed-by: Harshavardhana <harsha>
    Reviewed-by: Anand Avati <avati>
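
For environments that need it, the option described in the commit above can be 
enabled per volume (a sketch; 'test-vol' is only a placeholder, and note the 
op-version implications discussed in comment 6 below):

   # gluster volume set test-vol server.manage-gids on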

Comment 1 Anand Avati 2014-05-21 12:55:44 UTC
REVIEW: http://review.gluster.org/7829 (rpc: warn and truncate grouplist if RPC/AUTH can not hold everything) posted (#1) for review on release-3.5 by Niels de Vos (ndevos)

Comment 2 Anand Avati 2014-05-21 12:55:55 UTC
REVIEW: http://review.gluster.org/7830 (rpc: implement server.manage-gids for group resolving on the bricks) posted (#1) for review on release-3.5 by Niels de Vos (ndevos)

Comment 3 Anand Avati 2014-05-22 13:02:32 UTC
COMMIT: http://review.gluster.org/7829 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit 57ec16e7f6d08b9a1c07f8ece3db630b08557372
Author: Niels de Vos <ndevos>
Date:   Sun May 11 22:51:15 2014 -0300

    rpc: warn and truncate grouplist if RPC/AUTH can not hold everything
    
    The GlusterFS protocol currently uses AUTH_GLUSTERFS_V2 in the RPC/AUTH
    header. This header contains the uid, gid and auxiliary groups of the
    user/process that accesses the Gluster Volume.
    
    The AUTH_GLUSTERFS_V2 structure allows up to 65535 auxiliary groups to
    be passed on. Unfortunately, the RPC/AUTH header is limited to 400 bytes
    by the RPC specification: http://tools.ietf.org/html/rfc5531#section-8.2
    
    In order to not cause complete failures on the client-side when trying
    to encode an AUTH_GLUSTERFS_V2 that would result in more than 400 bytes,
    we can calculate the expected size of the other elements:
    
        1 | pid
        1 | uid
        1 | gid
        1 | groups_len
       XX | groups_val (GF_MAX_AUX_GROUPS=65535)
        1 | lk_owner_len
       YY | lk_owner_val (GF_MAX_LOCK_OWNER_LEN=1024)
      ----+-------------------------------------------
        5 | total xdr-units
    
      one XDR-unit is defined as BYTES_PER_XDR_UNIT = 4 bytes
      MAX_AUTH_BYTES = 400 is the maximum, this is 100 xdr-units.
      XX + YY can be 95 to fill the 100 xdr-units.
    
      Note that the on-wire protocol has tighter requirements than the
      internal structures. It is possible for xlators to use more groups and
      a bigger lk_owner than can be sent by a GlusterFS-client.
    
    This change prevents overflows when allocating the RPC/AUTH header. Two
    new macros are introduced to calculate the number of groups that fit in
    the RPC/AUTH header, taking the size of the lk_owner into account. In
    case the list of groups exceeds the maximum possible, only the first
    groups are passed over the RPC/GlusterFS protocol to the bricks.
    A warning is added to the logs, so that most system administrators will
    get informed.
    
    Reducing the number of groups is not a new invention. The
    RPC/AUTH header (AUTH_SYS or AUTH_UNIX) that NFS uses has a limit of 16
    groups. Most, if not all, NFS-clients will reduce any bigger number of
    groups to 16. (nfs.server-aux-gids can be used to work around the limit
    of 16 groups, but the Gluster NFS-server will be limited to a maximum of
    93 groups, or fewer in case the lk_owner structure contains more items.)
    
    Cherry picked from commit 8235de189845986a535d676b1fd2c894b9c02e52:
    > BUG: 1053579
    > Signed-off-by: Niels de Vos <ndevos>
    > Reviewed-on: http://review.gluster.org/7202
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Harshavardhana <harsha>
    > Reviewed-by: Santosh Pradhan <spradhan>
    > Reviewed-by: Vijay Bellur <vbellur>
    
    Change-Id: I8410e59d0fd246d601b54b961d3ae9cb5a858c10
    BUG: 1096425
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/7829
    Reviewed-by: Lalatendu Mohanty <lmohanty>
    Tested-by: Gluster Build System <jenkins.com>

Comment 4 Anand Avati 2014-05-23 08:34:10 UTC
COMMIT: http://review.gluster.org/7830 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit 6b624e5502193b9d57116fb341119c8468f9758f
Author: Niels de Vos <ndevos>
Date:   Tue May 20 16:12:03 2014 +0200

    rpc: implement server.manage-gids for group resolving on the bricks
    
    The new volume option 'server.manage-gids' can be enabled in
    environments where a user belongs to more than the current absolute
    maximum of 93 groups. This option triggers the following behavior:
    
    1. The AUTH_GLUSTERFS structure sent by GlusterFS clients (fuse, nfs or
       libgfapi) will contain only one (1) auxiliary group, instead of
       a full list. This reduces network usage and prevents problems in
       encoding the AUTH_GLUSTERFS structure which should fit in 400 bytes.
    2. The single group in the RPC Calls received by the server is replaced
       by resolving the groups server-side. Permission checks and similar in
       lower xlators are applied against the full list of groups the
       user belongs to, and not the single auxiliary group that the client
       sent.
    
    Cherry picked from commit 2fd499d148fc8865c77de8b2c73fe0b7e1737882:
    > BUG: 1053579
    > Signed-off-by: Niels de Vos <ndevos>
    > Reviewed-on: http://review.gluster.org/7501
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Santosh Pradhan <spradhan>
    > Reviewed-by: Harshavardhana <harsha>
    > Reviewed-by: Anand Avati <avati>
    
    Change-Id: I9e540de13e3022f8b63ff893ecba511129a47b91
    BUG: 1096425
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/7830
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Santosh Pradhan <spradhan>

Comment 5 Niels de Vos 2014-05-25 09:08:16 UTC
The first (and last?) Beta for GlusterFS 3.5.1 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.5.1beta release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-May/040377.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 6 Niels de Vos 2014-06-05 07:44:35 UTC
The server.manage-gids option causes an issue related to the op-version. Setting server.manage-gids increases the op-version to 4 (intentional), but the maximum op-version that glusterd supports in 3.5.0 is 3. A restart of glusterd then fails because it cannot start a volume that requires a higher op-version than it supports.

Discussion on the devel list:
- http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6699
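
If you run into this, the op-version that glusterd has recorded can be checked 
on disk (assuming the default glusterd working directory; adjust the path if 
your installation uses a different one):

   # grep operating-version /var/lib/glusterd/glusterd.info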

Comment 7 Anand Avati 2014-06-08 16:41:34 UTC
REVIEW: http://review.gluster.org/8010 (glusterd: Better op-version values and ranges) posted (#1) for review on release-3.5 by Niels de Vos (ndevos)

Comment 8 Anand Avati 2014-06-10 07:47:25 UTC
COMMIT: http://review.gluster.org/8010 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit 6648b92e980c9d59c719a461b37951109839182e
Author: Niels de Vos <ndevos>
Date:   Sun Jun 8 18:39:55 2014 +0200

    glusterd: Better op-version values and ranges
    
    Till now, the op-version was an incrementing integer that was
    incremented by 1 for every Y release (when using the X.Y.Z release
    numbering). This is not flexible enough to handle backports of features
    into Z releases.
    
    Going forward, from the upcoming 3.6.0 and 3.5.1 releases, the
    op-versions will be multi-digit integer values composed of the version
    numbers, instead of a simple incrementing integer. An X.Y.Z release will
    have XYZ as its op-version. Y and Z will always be 2 digits wide and
    will be padded with 0 if required. This way of bumping op-versions
    allows for gaps in between the subsequent Y releases. These gaps will
    allow backporting features from new Y releases into old Z releases.
    
    Change-Id: Ib6a09989f03521146e299ec0588fe36273191e47
    Depends-on: http://review.gluster.org/7963
    BUG: 1096425
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/8010
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
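
To illustrate the scheme from the commit above, an X.Y.Z release maps to the 
integer X*10000 + Y*100 + Z (a small shell sketch, not code from the patch):

    op_version() { echo $(( $1 * 10000 + $2 * 100 + $3 )); }
    op_version 3 5 1    # 30501
    op_version 3 6 0    # 30600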

Comment 9 Niels de Vos 2014-06-10 16:51:55 UTC
The second (and last?) Beta for GlusterFS 3.5.1 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.5.1beta2 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-June/040547.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 10 Niels de Vos 2014-06-24 11:05:21 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.1, please reopen this bug report.

glusterfs-3.5.1 has been announced on the Gluster Users mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-June/040723.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

