Bug 1249925 - DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.7.3
Hardware: x86_64 Linux
Priority: high  Severity: high
: ---
: ---
Assigned To: Anand Nekkunti
Rebalance
: Triaged
Depends On: 1244527 1245142
Blocks:
 
Reported: 2015-08-04 03:27 EDT by Anand Nekkunti
Modified: 2016-01-03 23:50 EST (History)
9 users

See Also:
Fixed In Version: glusterfs-3.7.4
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1245142
Environment:
Last Closed: 2015-09-09 05:38:55 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anand Nekkunti 2015-08-04 03:27:55 EDT
+++ This bug was initially created as a clone of Bug #1245142 +++

+++ This bug was initially created as a clone of Bug #1244527 +++

Problem statement:
============================

DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node.

Operations such as 'gluster v status' do not succeed; they fail with the message:
"Another transaction is in progress for great. Please try again after sometime."

Steps/procedure:

1. Create a distribute volume using 2 nodes.
2. FUSE-mount and NFS-mount it, and create some files/directories.
3. Add bricks and initiate a rebalance.
4. While the rebalance is in progress, stop glusterd on the second node.
5. Check the results.

Actual results:
================

1. The rebalance command did not complete; it hung.
2. No other operations could be performed on the volume.

Expected results:
==================
1. The rebalance should have continued even after glusterd was down on the second node.


Output:
===================

[root@casino-vm1 ~]# gluster v create great 10.70.35.57:/rhs/brick2/g0 10.70.35.136:/rhs/brick2/g0
volume create: great: success: please start the volume to access data
[root@casino-vm1 ~]# gluster v start great force
volume start: great: success
[root@casino-vm1 ~]# gluster v info great

Volume Name: great   
Type: Distribute
Volume ID: 2d3b4766-c99d-4113-8f0b-43116aeb931f
Status: Started
Number of Bricks: 2  
Transport-type: tcp  
Bricks:
Brick1: 10.70.35.57:/rhs/brick2/g0
Brick2: 10.70.35.136:/rhs/brick2/g0
Options Reconfigured:
performance.readdir-ahead: on
[root@casino-vm1 ~]#
[root@casino-vm1 ~]# gluster v add-brick great 10.70.35.57:/rhs/brick4/g0 10.70.35.136:/rhs/brick4/g0
volume add-brick: success
[root@casino-vm1 ~]# gluster v rebalance great start force; gluster v rebalance great status


^C
[root@casino-vm1 ~]# gluster v rebalance great start force
volume rebalance: great: failed: Another transaction is in progress for great. Please try again after sometime.
[root@casino-vm1 ~]# gluster v rebalance great start force
volume rebalance: great: failed: Another transaction is in progress for great. Please try again after sometime.
[root@casino-vm1 ~]# 


[root@casino-vm1 ~]# less  /var/log/glusterfs/great-rebalance.log 
..skipping...
-client-7: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2015-07-19 10:03:26.409868 (xid=0x1)
[2015-07-19 10:03:28.614410] W [MSGID: 114032] [client-handshake.c:1623:client_dump_version_cbk] 0-great-client-7: received RPC status error [Transport endpoint is not connected]
[2015-07-19 10:03:28.614441] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-great-client-7: disconnected from great-client-7. Client process will keep trying to connect to glusterd until brick's port is available
[2015-07-19 10:03:28.614472] W [MSGID: 109073] [dht-common.c:6815:dht_notify] 0-great-dht: Received CHILD_DOWN. Exiting
[2015-07-19 10:03:28.614511] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-great-client-5: disconnected from great-client-5. Client process will keep trying to connect to glusterd until brick's port is available
[2015-07-19 10:03:28.614517] W [socket.c:642:__socket_rwv] 0-great-client-1: readv on 10.70.35.136:24007 failed (No data available)
[2015-07-19 10:03:28.614552] W [MSGID: 109073] [dht-common.c:6815:dht_notify] 0-great-dht: Received CHILD_DOWN. Exiting
[2015-07-19 10:03:28.614799] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7faf7fce0580] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7faf7faac167] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7faf7faac27e] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7faf7faac34b] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15f)[0x7faf7faac90f] ))))) 0-great-client-1: forced unwinding frame type(Gluster Portmap) op(PORTBYBRICK(1)) called at 2015-07-19 10:03:26.383580 (xid=0x2)
[2015-07-19 10:03:28.614846] W [MSGID: 114032] [client-handshake.c:1506:client_query_portmap_cbk] 0-great-client-1: received RPC status error, try again later [Transport endpoint is not connected]
[2015-07-19 10:03:28.614879] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-great-client-1: disconnected from great-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2015-07-19 10:03:28.614898] W [MSGID: 109073] [dht-common.c:6815:dht_notify] 0-great-dht: Received CHILD_DOWN. Exiting
[2015-07-19 10:03:28.622710] I [MSGID: 109063] [dht-layout.c:702:dht_layout_normalize] 0-great-dht: Found anomalies in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=2 overlaps=0
[2015-07-19 10:03:28.622759] W [MSGID: 109005] [dht-selfheal.c:1782:dht_selfheal_directory] 0-great-dht: Directory selfheal failed: 4 subvolumes down.Not fixing. path = /, gfid = 00000000-0000-0000-0000-000000000001
[2015-07-19 10:03:28.622908] I [dht-rebalance.c:2652:gf_defrag_start_crawl] 0-great-dht: gf_defrag_start_crawl using commit hash 2908467018
[2015-07-19 10:03:28.623813] I [MSGID: 109081] [dht-common.c:3783:dht_setxattr] 0-great-dht: fixing the layout of /
[2015-07-19 10:03:28.623848] W [MSGID: 109016] [dht-selfheal.c:1474:dht_fix_layout_of_directory] 0-great-dht: Layout fix failed: 4 subvolume(s) are down. Skipping fix layout.
[2015-07-19 10:03:28.624132] E [MSGID: 109026] [dht-rebalance.c:2694:gf_defrag_start_crawl] 0-great-dht: fix layout on / failed
[2015-07-19 10:03:28.624174] I [MSGID: 109028] [dht-rebalance.c:3029:gf_defrag_status_get] 0-great-dht: Rebalance is failed. Time taken is 0.00 secs
[2015-07-19 10:03:28.624192] I [MSGID: 109028] [dht-rebalance.c:3033:gf_defrag_status_get] 0-great-dht: Files migrated: 0, size: 0, lookups: 0, failures: 1, skipped: 0
[2015-07-19 10:03:28.625060] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x374b207a51) [0x7faf7edc7a51] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7faf801a6075] -->/usr/sbin/glusterfs(cleanup_and_exit+0x71) [0x7faf801a5ba1] ) 0-: received signum (15), shutting down
(END) 


[root@casino-vm1 ~]# gluster v info great
 
Volume Name: great
Type: Distribute
Volume ID: 2d3b4766-c99d-4113-8f0b-43116aeb931f
Status: Started
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: 10.70.35.57:/rhs/brick2/g0
Brick2: 10.70.35.136:/rhs/brick2/g0
Brick3: 10.70.35.57:/rhs/brick1/g0
Brick4: 10.70.35.136:/rhs/brick1/g0
Brick5: 10.70.35.57:/rhs/brick3/g0
Brick6: 10.70.35.136:/rhs/brick3/g0
Brick7: 10.70.35.57:/rhs/brick4/g0
Brick8: 10.70.35.136:/rhs/brick4/g0
Options Reconfigured:
features.uss: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.min-free-disk: 10
performance.readdir-ahead: on
[root@casino-vm1 ~]# gluster v status great
Another transaction is in progress for great. Please try again after sometime.
[root@casino-vm1 ~]# 

[root@casino-vm1 ~]# gluster v add-brick great 10.70.35.57:/rhs/brick5/g0 10.70.35.136:/rhs/brick5/g0
volume add-brick: failed: Another transaction is in progress for great. Please try again after sometime.
[root@casino-vm1 ~]# 
[root@casino-vm1 ~]# 


Glusterd message:

[2015-07-19 10:03:40.013604] E [socket.c:2332:socket_connect_finish] 0-management: connection to 10.70.35.136:24007 failed (Connection refused)
The message "I [MSGID: 106004] [glusterd-handler.c:5051:__glusterd_peer_rpc_notify] 0-management: Peer <10.70.35.136> (<13bf706c-824a-4857-bad6-d928b5d11c4e>), in state <Peer in Cluster>, has disconnected from glusterd." repeated 22 times between [2015-07-19 10:03:28.629581] and [2015-07-19 10:04:42.111938]
[2015-07-19 10:04:48.315408] I [MSGID: 106163] [glusterd-handshake.c:1198:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30703
[2015-07-19 10:04:48.324162] I [MSGID: 106490] [glusterd-handler.c:2530:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 13bf706c-824a-4857-bad6-d928b5d11c4e
[2015-07-19 10:04:48.328611] I [MSGID: 106493] [glusterd-handler.c:3771:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 10.70.35.136 (0), ret: 0
[2015-07-19 10:04:48.403831] I [MSGID: 106492] [glusterd-handler.c:2706:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 13bf706c-824a-4857-bad6-d928b5d11c4e
[2015-07-19 10:04:48.403881] I [MSGID: 106502] [glusterd-handler.c:2751:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-07-19 10:04:48.403994] I [MSGID: 106493] [glusterd-rpc-ops.c:692:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 13bf706c-824a-4857-bad6-d928b5d11c4e
[2015-07-19 10:04:55.409542] I [MSGID: 106493] [glusterd-rpc-ops.c:478:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 13bf706c-824a-4857-bad6-d928b5d11c4e, host: 10.70.35.136, port: 0
[2015-07-19 10:04:55.414727] I [MSGID: 106492] [glusterd-handler.c:2706:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 13bf706c-824a-4857-bad6-d928b5d11c4e
[2015-07-19 10:04:55.414776] I [MSGID: 106502] [glusterd-handler.c:2751:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2015-07-19 10:04:55.425708] I [MSGID: 106493] [glusterd-rpc-ops.c:692:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 13bf706c-824a-4857-bad6-d928b5d11c4e
[2015-07-19 10:05:07.140768] W [glusterd-locks.c:575:glusterd_mgmt_v3_lock] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f6736153580] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_lock+0x1d4)[0x7f672abf5d04] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_op_txn_begin+0x51a)[0x7f672ab5792a] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(__glusterd_handle_defrag_volume+0x27e)[0x7f672abbb47e] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7f672ab3e81f] ))))) 0-management: Lock for great held by abbb8155-7e01-4fea-983e-2e2c929ebd7c
[2015-07-19 10:05:07.140826] E [MSGID: 106119] [glusterd-handler.c:719:glusterd_op_txn_begin] 0-management: Unable to acquire lock for great
[2015-07-19 10:07:46.796666] W [glusterd-locks.c:575:glusterd_mgmt_v3_lock] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f6736153580] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_lock+0x1d4)[0x7f672abf5d04] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_op_txn_begin+0x51a)[0x7f672ab5792a] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(__glusterd_handle_defrag_volume+0x27e)[0x7f672abbb47e] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7f672ab3e81f] ))))) 0-management: Lock for great held by abbb8155-7e01-4fea-983e-2e2c929ebd7c
[2015-07-19 10:07:46.796699] E [MSGID: 106119] [glusterd-handler.c:719:glusterd_op_txn_begin] 0-management: Unable to acquire lock for great
(END)

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-19 13:25:50 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Triveni Rao on 2015-07-19 13:43:31 EDT ---

[root@casino-vm1 ~]# rpm -qa | grep gluster
gluster-nagios-addons-0.2.3-1.el6rhs.x86_64
glusterfs-api-3.7.1-10.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-10.el6rhs.x86_64
gluster-nagios-common-0.2.0-1.el6rhs.noarch
glusterfs-libs-3.7.1-10.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-10.el6rhs.x86_64
glusterfs-fuse-3.7.1-10.el6rhs.x86_64
glusterfs-server-3.7.1-10.el6rhs.x86_64
glusterfs-rdma-3.7.1-10.el6rhs.x86_64
vdsm-gluster-4.16.20-1.1.el6rhs.noarch
glusterfs-3.7.1-10.el6rhs.x86_64
glusterfs-cli-3.7.1-10.el6rhs.x86_64
[root@casino-vm1 ~]# 


sosreport uploaded

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1244527/sosreport-casino-vm1.lab.eng.blr.redhat.com.003-20150719063653.tar

--- Additional comment from Triveni Rao on 2015-07-19 13:49:25 EDT ---

[2015-07-19 10:35:55.178461] E [MSGID: 106119] [glusterd-syncop.c:1819:gd_sync_task_begin] 0-management: Unable to acquire lock for great
The message "I [MSGID: 106488] [glusterd-handler.c:1463:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req" repeated 2 times between [2015-07-19 10:36:55.325248] and [2015-07-19 10:36:55.328233]
[2015-07-19 10:37:45.643073] I [MSGID: 106487] [glusterd-handler.c:1402:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2015-07-19 10:37:45.766638] I [MSGID: 106062] [glusterd-geo-rep.c:309:__glusterd_handle_gsync_set] 0-management: slave not found, while handling geo-replication options
[2015-07-19 10:37:45.767872] W [glusterd-locks.c:575:glusterd_mgmt_v3_lock] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f6736153580] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_lock+0x1d4)[0x7f672abf5d04] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x915)[0x7f672abf1ec5] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7f672abf1f8b] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(__glusterd_handle_gsync_set+0x16f)[0x7f672abd083f] ))))) 0-management: Lock for great held by abbb8155-7e01-4fea-983e-2e2c929ebd7c
[2015-07-19 10:37:45.767895] E [MSGID: 106119] [glusterd-syncop.c:1819:gd_sync_task_begin] 0-management: Unable to acquire lock for great
[2015-07-19 10:37:46.020277] I [MSGID: 106499] [glusterd-handler.c:4258:__glusterd_handle_status_volume] 0-management: Received status volume req for volume great
[2015-07-19 10:37:46.020553] W [glusterd-locks.c:575:glusterd_mgmt_v3_lock] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f6736153580] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_lock+0x1d4)[0x7f672abf5d04] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x915)[0x7f672abf1ec5] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7f672abf1f8b] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(__glusterd_handle_status_volume+0x1b2)[0x7f672ab3dd42] ))))) 0-management: Lock for great held by abbb8155-7e01-4fea-983e-2e2c929ebd7c
[2015-07-19 10:37:46.022241] I [MSGID: 106499] [glusterd-handler.c:4258:__glusterd_handle_status_volume] 0-management: Received status volume req for volume test
[2015-07-19 10:37:45.881620] I [MSGID: 106062] [glusterd-geo-rep.c:309:__glusterd_handle_gsync_set] 0-management: slave not found, while handling geo-replication options
[2015-07-19 10:37:46.020575] E [MSGID: 106119] [glusterd-syncop.c:1819:gd_sync_task_begin] 0-management: Unable to acquire lock for great
[2015-07-19 10:48:38.245354] W [glusterd-locks.c:575:glusterd_mgmt_v3_lock] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f6736153580] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_lock+0x1d4)[0x7f672abf5d04] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_op_txn_begin+0x51a)[0x7f672ab5792a] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(__glusterd_handle_defrag_volume+0x27e)[0x7f672abbb47e] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7f672ab3e81f] ))))) 0-management: Lock for great held by abbb8155-7e01-4fea-983e-2e2c929ebd7c
[2015-07-19 10:48:38.245561] E [MSGID: 106119] [glusterd-handler.c:719:glusterd_op_txn_begin] 0-management: Unable to acquire lock for great
(END)

--- Additional comment from Rejy M Cyriac on 2015-07-20 07:37:34 EDT ---

Moving to 3.1.z as per decision at RHGS 3.1 Blocker BZ Status Check meeting

--- Additional comment from Anand Nekkunti on 2015-07-21 02:34:30 EDT ---

RCA: When one of the glusterd instances goes down during rebalance start, the callback function (_glusterd_commit_op_cbk) is invoked with rpc_status set to -1. On RPC success we get the txn_id from the response, but on RPC failure we fall back to global_txn_id, which is always zero; this drives op_sm into an inconsistent state.

--- Additional comment from Anand Avati on 2015-07-21 06:19:47 EDT ---

REVIEW: http://review.gluster.org/11728 (glusrterd:getting txn_if from my_frame in op_sm call back RCA: when one of the glusterd went down during rebalance start then call back function (_glusterd_commit_op_cbk ) called with rpc_status is -1, In case of rpc success we are getting txn_id from response but in failure case of rpc, we are referring global_txn_id which is always Zero,this resulting op_sm into inconsistent  state.) posted (#1) for review on master by Anand Nekkunti (anekkunt@redhat.com)

--- Additional comment from Anand Avati on 2015-07-21 06:22:31 EDT ---

REVIEW: http://review.gluster.org/11728 (glusrterd: getting txn_id from myframe in op_sm call back. RCA: when one of the glusterd went down during rebalance start then call back function (_glusterd_commit_op_cbk ) called with rpc_status is -1, In case of rpc success we are getting txn_id from response but in failure case of rpc, we are referring global_txn_id which is always Zero,this resulting op_sm into inconsistent  state.) posted (#2) for review on master by Anand Nekkunti (anekkunt@redhat.com)

--- Additional comment from Anand Avati on 2015-07-21 14:41:38 EDT ---

REVIEW: http://review.gluster.org/11728 (glusterd: getting txn_id from frame->cookie in op_sm call back) posted (#3) for review on master by Anand Nekkunti (anekkunt@redhat.com)

--- Additional comment from Anand Avati on 2015-07-23 11:29:11 EDT ---

REVIEW: http://review.gluster.org/11728 (glusterd: getting txn_id from frame->cookie in op_sm call back) posted (#5) for review on master by Anand Nekkunti (anekkunt@redhat.com)
Comment 1 Anand Avati 2015-08-04 03:30:20 EDT
REVIEW: http://review.gluster.org/11823 (glusterd: getting txn_id from frame->cookie in op_sm call back) posted (#1) for review on release-3.7 by Anand Nekkunti (anekkunt@redhat.com)
Comment 2 Anand Avati 2015-08-06 08:23:06 EDT
REVIEW: http://review.gluster.org/11823 (glusterd: getting txn_id from frame->cookie in op_sm call back) posted (#2) for review on release-3.7 by Anand Nekkunti (anekkunt@redhat.com)
Comment 3 Kaushal 2015-09-09 05:38:55 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
