+++ This bug was initially created as a clone of Bug #1243722 +++

Description of problem:
I was running rebalance automation on a gluster volume with SSL network encryption enabled in both the data and management paths. When a client running a gluster version that does not support SSL tried to mount this volume, glusterd on the volfile server crashed.

Version-Release number of selected component (if applicable):
gluster server: glusterfs-3.7.1-10.el6rhs.x86_64
gluster client: glusterfs-3.6.0.53-1.el6.x86_64

How reproducible:
Hit only once across many tries.

Steps to Reproduce:
1. Run rebalance regression tests with SSL enabled, using an old client version.

Actual results:
glusterd crashed with the backtrace below:

#0  list_del (rpc=<value optimized out>, xl=0x7fb94bfe1050, event=<value optimized out>, data=0x7fb92c000be0) at ../../../../libglusterfs/src/list.h:76
#1  glusterd_rpcsvc_notify (rpc=<value optimized out>, xl=0x7fb94bfe1050, event=<value optimized out>, data=0x7fb92c000be0) at glusterd.c:347
#2  0x00007fb94a9df665 in rpcsvc_handle_disconnect (svc=0x7fb94bfea380, trans=0x7fb92c000be0) at rpcsvc.c:754
#3  0x00007fb94a9e11c0 in rpcsvc_notify (trans=0x7fb92c000be0, mydata=<value optimized out>, event=<value optimized out>, data=0x7fb92c000be0) at rpcsvc.c:792
#4  0x00007fb94a9e2ad8 in rpc_transport_notify (this=<value optimized out>, event=<value optimized out>, data=<value optimized out>) at rpc-transport.c:543
#5  0x00007fb93dca9ba3 in socket_poller (ctx=0x7fb92c000be0) at socket.c:2582
#6  0x00007fb949d02a51 in start_thread () from /lib64/libpthread.so.0
#7  0x00007fb94966c96d in clone () from /lib64/libc.so.6

Expected results:
glusterd should not crash even when a client that does not support SSL tries to mount an SSL-enabled volume.

Additional info:
I will upload the sosreport from the crashed machine.
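Frame #0 above is list_del() from libglusterfs/src/list.h, inlined into glusterd_rpcsvc_notify(). A minimal standalone sketch of that kernel-style intrusive list (the names mirror the upstream header, but this is an illustration, not the gluster source) shows why unlinking a node whose prev/next pointers were never initialized dereferences garbage and crashes:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified intrusive doubly-linked list, in the style of
 * libglusterfs/src/list.h. */
struct list_head {
    struct list_head *next;
    struct list_head *prev;
};

/* Point the head at itself: an inited, empty list. */
static void INIT_LIST_HEAD(struct list_head *head) {
    head->next = head;
    head->prev = head;
}

/* Insert 'node' right after 'head'. */
static void list_add(struct list_head *node, struct list_head *head) {
    node->prev = head;
    node->next = head->next;
    head->next->prev = node;
    head->next = node;
}

/* Unlink 'node'. This dereferences node->prev and node->next: if the
 * node was never inited or added to a list, those pointers are garbage
 * (or NULL), and this is exactly where frame #0 crashes. */
static void list_del(struct list_head *node) {
    node->prev->next = node->next;
    node->next->prev = node->prev;
    node->next = NULL;
    node->prev = NULL;
}

static int list_empty(struct list_head *head) {
    return head->next == head;
}
```

In the crash, the rejected (non-SSL) client's transport was never inited or added to glusterd's xprt_list, so the DISCONNECT-path list_del() walked uninitialized pointers.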
Change has been posted for review at http://review.gluster.org/11692
REVIEW: http://review.gluster.org/11692 (rpc,server,glusterd: Init transport list for accepted transport) posted (#2) for review on master by Kaushal M (kaushal)
REVIEW: http://review.gluster.org/11692 (rpc,server,glusterd: Init transport list for accepted transport) posted (#3) for review on master by Kaushal M (kaushal)
COMMIT: http://review.gluster.org/11692 committed in master by Raghavendra G (rgowdapp)
------
commit a909ccfa1b4cbf656c4608ef2124347851c492cb
Author: Kaushal M <kaushal>
Date: Thu Jul 16 14:52:36 2015 +0530

rpc,server,glusterd: Init transport list for accepted transport

GlusterD or a brick would crash when encrypted transport was enabled and an unencrypted client tried to connect to them. The crash occurred when GlusterD/server tried to remove the transport from their xprt_list on a DISCONNECT event. Because the client transport's list head wasn't inited, the process crashed when list_del was performed. Initing the client transport's list head during acceptance prevents this crash.

Also, an extra check has been added to the GlusterD and Server notification handlers for client DISCONNECT events. The handlers now first check whether the client transport is a member of any list, since the DISCONNECT handlers can be called without the ACCEPT handler (which adds the transport to the list) ever having run. This also occurs when an unencrypted client tries to establish a connection with an encrypted server.

Change-Id: Icc24a08d60e978aaa1d3322e0cbed680dcbda2b4
BUG: 1243774
Signed-off-by: Kaushal M <kaushal>
Reviewed-on: http://review.gluster.org/11692
Tested-by: Gluster Build System <jenkins.com>
Tested-by: NetBSD Build System <jenkins.org>
Reviewed-by: Raghavendra G <rgowdapp>
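The commit above makes two changes: init the accepted transport's list head, and guard the DISCONNECT handlers with a list-membership check. A hedged sketch of that pattern follows; the function and struct names here are hypothetical stand-ins, since the real changes live in the rpc transport and glusterd notification code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified intrusive list, in the style of libglusterfs/src/list.h. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h; h->prev = h; }
static int  list_empty(struct list_head *h)     { return h->next == h; }

static void list_add(struct list_head *n, struct list_head *h) {
    n->prev = h; n->next = h->next;
    h->next->prev = n; h->next = n;
}

/* Unlink and re-init, so the node is safely checkable/deletable again. */
static void list_del_init(struct list_head *n) {
    n->prev->next = n->next;
    n->next->prev = n->prev;
    INIT_LIST_HEAD(n);
}

/* Hypothetical stand-in for an rpc transport with an xprt_list link. */
struct transport {
    struct list_head list;
};

/* Fix 1: on accept, init the client transport's list head, so a later
 * unlink never walks garbage prev/next pointers. */
void transport_accepted(struct transport *client) {
    INIT_LIST_HEAD(&client->list);
}

/* ACCEPT notification: the server adds the transport to its xprt_list. */
void on_accept(struct transport *client, struct list_head *xprt_list) {
    list_add(&client->list, xprt_list);
}

/* Fix 2: DISCONNECT can arrive without ACCEPT ever having fired (e.g. a
 * rejected non-SSL client), so only unlink if the node is on a list. */
void on_disconnect(struct transport *client) {
    if (!list_empty(&client->list))
        list_del_init(&client->list);
}
```

With both fixes, the "disconnect without accept" path becomes a no-op instead of a wild-pointer list_del.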
REVIEW: http://review.gluster.org/11762 (rpc,server,glusterd: Init transport list for accepted transport) posted (#1) for review on release-3.7 by Kaushal M (kaushal)
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, has already been closed. Hence closing this mainline BZ as well.
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user