Red Hat Bugzilla – Bug 1248415
rebalance stuck at 0 bytes when auth.allow is set
Last modified: 2016-06-16 09:28:07 EDT
+++ This bug was initially created as a clone of Bug #1213893 +++
Description of problem:
When auth.allow is set, rebalance gets stuck unless the IPs of the gluster nodes themselves are included.
The rebalance status remains 'in progress' but stays at 0 bytes:
Node               Rebalanced-files    size      scanned    failures    skipped    status         run time in secs
---------          ----------------    ------    -------    --------    -------    -----------    ----------------
localhost                         0    0Bytes          0           0          0    in progress                0.00
rhs30-node3                       0    0Bytes          0           0          0    in progress                0.00
rhs30-node4                       0    0Bytes          0           0          0    in progress                0.00
192.168.100.206                   0    0Bytes          0           0          0    in progress                0.00
volume rebalance: thingluster: success:
In the brick logs, we can see the authentication being denied:
[2015-04-21 13:43:03.131329] E [server-handshake.c:589:server_setvolume] 0-thingluster-server: Cannot authenticate client from cbuissar-rhs30-node1-8521-2015/04/21-13:42:58:108057-thingluster-client-0-0-0 220.127.116.11
[2015-04-21 13:43:08.419405] E [authenticate.c:239:gf_authenticate] 0-auth: no authentication module is interested in accepting remote-client (null)
Version-Release number of selected component (if applicable): tested on 3.0u3 and 3.0u4
How reproducible: 100%/easy
Steps to Reproduce:
1. set auth.allow to some client IP
2. mount and move files
3. start rebalance
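For example (volume and node names taken from this report; the client IP, mount point, and file sizes are illustrative):

# 1. restrict access to a single client IP
gluster volume set thingluster auth.allow 192.168.100.50

# 2. mount from the allowed client and create some files
mount -t glusterfs rhs30-node3:/thingluster /mnt/thingluster
for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/thingluster/file$i bs=1M count=1; done

# 3. start the rebalance and check its status
gluster volume rebalance thingluster start
gluster volume rebalance thingluster status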
Actual results:
rebalance hangs; authentication errors appear in the brick logs

Expected results:
rebalance should still work when auth.allow is restricted
Workaround: add all of the gluster nodes' IPs to auth.allow, for example:
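(192.168.100.206 appears in the status output above; the other node IPs here, and the client IP, are illustrative)

gluster volume set thingluster auth.allow "192.168.100.50,192.168.100.203,192.168.100.204,192.168.100.205,192.168.100.206"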
--- Additional comment from Cedric Buissart on 2015-04-21 09:55:45 EDT ---
And the rebalance-<volume>.log:
[2015-04-21 13:43:08.412805] W [client-handshake.c:1108:client_setvolume_cbk] 0-thingluster-client-3: failed to set the volume (Permission denied)
[2015-04-21 13:43:08.412821] W [client-handshake.c:1134:client_setvolume_cbk] 0-thingluster-client-3: failed to get 'process-uuid' from reply dict
[2015-04-21 13:43:08.412828] E [client-handshake.c:1140:client_setvolume_cbk] 0-thingluster-client-3: SETVOLUME on remote-host failed: Authentication failed
[2015-04-21 13:43:08.412834] I [client-handshake.c:1225:client_setvolume_cbk] 0-thingluster-client-3: sending AUTH_FAILED event
REVIEW: http://review.gluster.org/11803 (glusterd/rebalance: create rebalance volfile) posted (#1) for review on master by N Balachandran (email@example.com)
REVIEW: http://review.gluster.org/11819 (glusterd/rebalance: trusted rebalance volfile) posted (#1) for review on master by N Balachandran (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/11819 (glusterd/rebalance: trusted rebalance volfile) posted (#2) for review on master by N Balachandran (email@example.com)
COMMIT: http://review.gluster.org/11819 committed in master by Atin Mukherjee (firstname.lastname@example.org)
Author: N Balachandran <email@example.com>
Date: Mon Aug 3 13:57:37 2015 +0530
glusterd/rebalance: trusted rebalance volfile
Creating the client volfiles with GF_CLIENT_OTHER
overwrites the trusted rebalance volfile and causes rebalance
to fail if auth.allow is set.
Now, we always set the value of trusted-client to GF_CLIENT_TRUSTED
for rebalance volfiles.
Signed-off-by: N Balachandran <firstname.lastname@example.org>
Tested-by: Gluster Build System <email@example.com>
Tested-by: NetBSD Build System <firstname.lastname@example.org>
Reviewed-by: Avra Sengupta <email@example.com>
Reviewed-by: Rajesh Joseph <firstname.lastname@example.org>
Reviewed-by: Atin Mukherjee <email@example.com>
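With this patch, the rebalance volfile is always generated as a trusted client, so rebalance no longer requires the node IPs to be listed in auth.allow. A quick way to verify on a fixed build (volume name and client IP as in the reproducer above):

gluster volume set thingluster auth.allow 192.168.100.50
gluster volume rebalance thingluster start
gluster volume rebalance thingluster status
# status should progress to 'completed' instead of hanging at 0 bytes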
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed and closed against that release, hence this mainline BZ is being closed as well.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
glusterfs-3.8.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.