Bug 997576 - glusterd becomes unresponsive when acting as mountbroker
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: krishnan parthasarathi
Reported: 2013-08-15 12:14 EDT by Joe Julian
Modified: 2015-11-03 18:05 EST
CC: 2 users

Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Last Closed: 2014-04-17 07:45:53 EDT
Type: Bug


Description Joe Julian 2013-08-15 12:14:56 EDT
Description of problem:
glusterd becomes unresponsive when acting as a mountbroker. During the mount process it spawns the glusterfs client, but glusterd never ACKs that client's SYN, so the client's connect() times out.

Version-Release number of selected component (if applicable):
3.4.0-8

How reproducible:
Always

Steps to Reproduce:
1. install glusterfs-server on two machines
2. On the slave
2a.  # groupadd geogroup
2b.  # useradd  geoaccount -g geogroup
2c.  # patch -p1 <<'EOF'
--- a/etc/glusterfs/glusterd.vol	2013-08-15 09:01:42.000000000 -0700
+++ b/etc/glusterfs/glusterd.vol	2013-08-15 09:01:23.000000000 -0700
@@ -5,4 +5,8 @@
     option transport.socket.keepalive-time 10
     option transport.socket.keepalive-interval 2
     option transport.socket.read-fail-log off
+
+    option mountbroker-root /mnt/mountbroker-root 
+    option mountbroker-geo-replication.geoaccount myvol
+    option geo-replication-log-group geogroup
 end-volume
EOF

3. Start glusterd on both servers
4. Create volumes on both servers:
  server1# gluster volume create mastervol server1:/tmp/myvol/brick
  server1# gluster volume start mastervol
  server2# gluster volume create slavevol server2:/tmp/myvol/brick
  server2# gluster volume start slavevol

5. Create an ssh key on server1 and add its public key to geoaccount's authorized_keys on server2
6. Start geo-replication:
  server1# gluster volume geo-replication mastervol geoaccount@server2::slavevol start

Actual results:
A tcpdump of port 24007 on server2's loopback interface shows the client repeatedly sending SYN packets that are never ACKed. Attempted use of the CLI appears to hang (I wasn't patient enough to see whether that would also time out, but I expect it would after the default 120 seconds).

Expected results:
glusterd remains responsive and accepts the connection from the glusterfs client so the mount can complete.

Additional info:
Reported on IRC by lurbs <user@uber.geek.nz> and reproduced by me. This problem only occurs when glusterd acts as a mountbroker; when geo-replication runs as root, it does not occur.
Comment 1 Joe Julian 2013-08-15 16:55:58 EDT
(gdb) thread apply all bt

Thread 6 (Thread 0x7febf1d99700 (LWP 30369)):
#0  do_sigwait (sig=0x7febf1d98e0c, set=<optimized out>) at ../sysdeps/unix/sysv/linux/sigwait.c:64
#1  __sigwait (set=set@entry=0x7febf1d98e10, sig=sig@entry=0x7febf1d98e0c)
    at ../sysdeps/unix/sysv/linux/sigwait.c:99
#2  0x00007febf4585e5f in glusterfs_sigwaiter (arg=<optimized out>) at glusterfsd.c:1604
#3  0x00007febf3894d15 in start_thread (arg=0x7febf1d99700) at pthread_create.c:308
#4  0x00007febf320253d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:114

Thread 5 (Thread 0x7febf1598700 (LWP 30370)):
#0  0x00007febf389bba7 in __libc_waitpid (pid=<optimized out>, stat_loc=stat_loc@entry=0x7febf613791c, 
    options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#1  0x00007febf4131026 in runner_end_reuse (runner=runner@entry=0x7febf6137a20) at run.c:345
#2  0x00007febf4131bd0 in runner_run_generic (rfin=0x7febf4131000 <runner_end_reuse>, runner=0x7febf6137a20, 
    runner@entry=0xffffffff) at run.c:386
#3  runner_run_reuse (runner=runner@entry=0x7febf6137a20) at run.c:417
#4  0x00007febf0363f9c in glusterd_do_mount (label=0x7febe800d530 "geoaccount", argdict=0x7febf1f7c7d4, 
    path=path@entry=0x7febf6137b48, op_errno=op_errno@entry=0x7febf6137b44) at glusterd-mountbroker.c:661
#5  0x00007febf02fd251 in __glusterd_handle_mount (req=req@entry=0x7febee5fb920) at glusterd-handler.c:2468
#6  0x00007febf0309f4f in glusterd_big_locked_handler (req=0x7febee5fb920, 
    actor_fn=0x7febf02fd190 <__glusterd_handle_mount>) at glusterd-handler.c:75
#7  0x00007febf4121c52 in synctask_wrap (old_task=<optimized out>) at syncop.c:131
#8  0x00007febf3156770 in ?? () from /lib64/libc.so.6
#9  0x0000000000000000 in ?? ()

Thread 4 (Thread 0x7febf0d97700 (LWP 30371)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:217
#1  0x00007febf4123433 in syncenv_task (proc=proc@entry=0x7febf5f161f0) at syncop.c:306
#2  0x00007febf4123870 in syncenv_processor (thdata=0x7febf5f161f0) at syncop.c:384
#3  0x00007febf3894d15 in start_thread (arg=0x7febf0d97700) at pthread_create.c:308
#4  0x00007febf320253d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:114

Thread 3 (Thread 0x7febecc72700 (LWP 30391)):
#0  0x00007febf389b86d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007febf4103504 in gf_timer_proc (ctx=0x7febf5ef9010) at timer.c:173
#2  0x00007febf3894d15 in start_thread (arg=0x7febecc72700) at pthread_create.c:308
#3  0x00007febf320253d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:114

Thread 2 (Thread 0x7febe7fff700 (LWP 30394)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:165
#1  0x00007febf0367483 in hooks_worker (args=<optimized out>) at glusterd-hooks.c:451
#2  0x00007febf3894d15 in start_thread (arg=0x7febe7fff700) at pthread_create.c:308
#3  0x00007febf320253d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:114

Thread 1 (Thread 0x7febf4550740 (LWP 30368)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:165
#1  0x00007febf4121c2b in __synclock_lock (lock=lock@entry=0x7febf5f2b778) at syncop.c:527
#2  0x00007febf41241be in synclock_lock (lock=lock@entry=0x7febf5f2b778) at syncop.c:545
#3  0x00007febf0309f49 in glusterd_big_locked_handler (req=0x7febee5fa02c, 
    actor_fn=0x7febf033e780 <__server_getspec>) at glusterd-handler.c:74
#4  0x00007febf3ecad3d in rpcsvc_handle_rpc_call (svc=<optimized out>, trans=trans@entry=0x7febf5f30a20, 
    msg=msg@entry=0x7febf5f2e320) at rpcsvc.c:549
#5  0x00007febf3ecb1d3 in rpcsvc_notify (trans=0x7febf5f30a20, mydata=<optimized out>, event=<optimized out>, 
    data=0x7febf5f2e320) at rpcsvc.c:643
#6  0x00007febf3ece563 in rpc_transport_notify (this=this@entry=0x7febf5f30a20, 
    event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7febf5f2e320) at rpc-transport.c:497
#7  0x00007febee3eec09 in socket_event_poll_in (this=this@entry=0x7febf5f30a20) at socket.c:2118
#8  0x00007febee3f1a1c in socket_event_handler (fd=<optimized out>, idx=<optimized out>, data=0x7febf5f30a20, 
    poll_in=1, poll_out=0, poll_err=0) at socket.c:2230
#9  0x00007febf413a58a in event_dispatch_epoll_handler (i=<optimized out>, events=0x7febf5f35b30, 
    event_pool=0x7febf5f15520) at event-epoll.c:384
#10 event_dispatch_epoll (event_pool=0x7febf5f15520) at event-epoll.c:445
#11 0x00007febf4583965 in main (argc=2, argv=0x7fff2cc2ae28) at glusterfsd.c:1934
Comment 2 Joe Julian 2013-08-15 17:01:17 EDT
(gdb) thread apply all bt full

Thread 6 (Thread 0x7febca322700 (LWP 8114)):
#0  0x00007febcbe2bd41 in sigwait () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x00007febccb15ecb in glusterfs_sigwaiter (arg=<optimized out>) at glusterfsd.c:1604
        set = {__val = {18947, 0 <repeats 15 times>}}
        ret = <optimized out>
        sig = 0
#2  0x00007febcbe24c53 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x00007febcb78e13d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 5 (Thread 0x7febc9b21700 (LWP 8115)):
#0  0x00007febcbe28922 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x00007febcc6b036f in syncenv_task (proc=proc@entry=0x7febcd7a91e0) at syncop.c:306
        env = 0x7febcd7a91e0
        task = 0x0
        sleep_till = {tv_sec = 1376600656, tv_nsec = 0}
        ret = <optimized out>
#2  0x00007febcc6b0f10 in syncenv_processor (thdata=0x7febcd7a91e0) at syncop.c:384
        env = 0x7febcd7a91e0
        proc = 0x7febcd7a91e0
        task = <optimized out>
#3  0x00007febcbe24c53 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x00007febcb78e13d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 4 (Thread 0x7febc9320700 (LWP 8116)):
#0  0x00007febcbe28922 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x00007febcc6b036f in syncenv_task (proc=proc@entry=0x7febcd7a95a0) at syncop.c:306
        env = 0x7febcd7a91e0
        task = 0x0
        sleep_till = {tv_sec = 1376600661, tv_nsec = 0}
        ret = <optimized out>
#2  0x00007febcc6b0f10 in syncenv_processor (thdata=0x7febcd7a95a0) at syncop.c:384
        env = 0x7febcd7a91e0
        proc = 0x7febcd7a95a0
        task = <optimized out>
#3  0x00007febcbe24c53 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x00007febcb78e13d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 3 (Thread 0x7febc4f9d700 (LWP 8136)):
#0  0x00007febcbe2b80d in nanosleep () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x00007febcc691e04 in gf_timer_proc (ctx=0x7febcd78c010) at timer.c:173
        now = 1376600443729011
        now_tv = {tv_sec = 1376600443, tv_usec = 729011}
        event = 0x7febb8000920
        reg = 0x7febc000b2a0
        sleepts = {tv_sec = 1, tv_nsec = 0}
        __FUNCTION__ = "gf_timer_proc"
#2  0x00007febcbe24c53 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x00007febcb78e13d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 2 (Thread 0x7febbffff700 (LWP 8137)):
#0  0x00007febcbe28575 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x00007febc88ecc63 in hooks_worker (args=<optimized out>) at glusterd-hooks.c:451
        conf = <optimized out>
        hooks_priv = 0x7febcd7bd520
        stub = <optimized out>
#2  0x00007febcbe24c53 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x00007febcb78e13d in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 1 (Thread 0x7febccaca740 (LWP 8113)):
#0  0x00007febcb78e713 in epoll_wait () from /lib64/libc.so.6
No symbol table info available.
#1  0x00007febcc6c6d07 in event_dispatch_epoll (event_pool=0x7febcd7a8520) at event-epoll.c:428
        events = 0x7febcd7bf4f0
        i = <optimized out>
        ret = -1
        __FUNCTION__ = "event_dispatch_epoll"
#2  0x00007febccb13906 in main (argc=3, argv=0x7fff0a13d048) at glusterfsd.c:1934
        ctx = 0x7febcd78c010
        ret = 0
        cmdlinestr = "/usr/sbin/glusterd -p /run/glusterd.pid", '\000' <repeats 4056 times>
        __FUNCTION__ = "main"
Comment 3 Anand Avati 2013-08-15 17:11:44 EDT
REVIEW: http://review.gluster.org/5636 (glusterd: release big locks while doing mount) posted (#1) for review on master by Anand Avati (avati@redhat.com)
Comment 4 Anand Avati 2013-08-19 01:12:30 EDT
COMMIT: http://review.gluster.org/5636 committed in master by Anand Avati (avati@redhat.com) 
------
commit 3c9f3e642fd3ea81b9371eb1eec946882b1cb746
Author: Anand Avati <avati@redhat.com>
Date:   Thu Aug 15 13:53:52 2013 -0700

    glusterd: release big locks while doing mount
    
    Else things can deadlock in getspec v/s glusterd_do_mount()
    
    Change-Id: Ie70b43916e495c1c8f93e4ed0836c2fb7b0e1f1d
    BUG: 997576
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5636
    Tested-by: Joe Julian <joe@julianfamily.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
Comment 5 Anand Avati 2013-09-09 20:55:10 EDT
REVIEW: http://review.gluster.org/5881 (glusterd: release big locks while doing mount) posted (#1) for review on release-3.4 by Anand Avati (avati@redhat.com)
Comment 6 Anand Avati 2013-09-09 21:09:33 EDT
REVIEW: http://review.gluster.org/5881 (glusterd: release big locks while doing mount) posted (#2) for review on release-3.4 by Anand Avati (avati@redhat.com)
Comment 7 Anand Avati 2013-09-10 04:24:01 EDT
COMMIT: http://review.gluster.org/5881 committed in release-3.4 by Vijay Bellur (vbellur@redhat.com) 
------
commit d20878ca40a59c39fc0e87a31c780c751bd1d8f1
Author: Anand Avati <avati@redhat.com>
Date:   Thu Aug 15 13:53:52 2013 -0700

    glusterd: release big locks while doing mount
    
    Else things can deadlock in getspec v/s glusterd_do_mount()
    
    Change-Id: Ie70b43916e495c1c8f93e4ed0836c2fb7b0e1f1d
    BUG: 997576
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5881
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Comment 8 Niels de Vos 2014-04-17 07:45:53 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
