Description of problem:
While taking a snapshot using the scheduler, one of the brick processes crashed.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
1/1

Steps to Reproduce:
1. Create a 2*2 distributed-replicate volume
2. Enable the snapshot scheduler
3. Schedule a snapshot every one minute

Actual results:
One of the brick processes crashed.

Additional info:

bt
=======================
#0  0x00007f19a2a12394 in glusterfs_handle_barrier (req=0x7f19a30cffcc) at glusterfsd-mgmt.c:1348
        ret = <optimized out>
        brick_req = {name = 0x7f198c0008e0 "repvol", op = 10, input = {input_len = 1783, input_val = 0x7f198c000900 ""}}
        brick_rsp = {op_ret = 0, op_errno = 0, output = {output_len = 0, output_val = 0x0}, op_errstr = 0x0}
        ctx = 0x7f19a3085010
        active = 0x0
        any = 0x0
        xlator = 0x0
        old_THIS = 0x0
        dict = 0x0
        name = '\000' <repeats 1023 times>
        barrier = _gf_true
        barrier_err = _gf_false
        __FUNCTION__ = "glusterfs_handle_barrier"
#1  0x00007f19a2550a92 in synctask_wrap (old_task=<optimized out>) at syncop.c:375
        task = 0x7f1990002510
#2  0x00007f19a0c0fcf0 in ?? () from /lib64/libc.so.6
No symbol table info available.
#3  0x0000000000000000 in ?? ()
No symbol table info available.

RCA:
The function from which this core was generated is glusterfs_handle_barrier(). From the core it looks like glusterfsd_ctx (the global context) in the brick process did not have ctx->active initialized, which happens during graph initialization. We also saw that the brick process had only just come up when GlusterD sent the barrier brick op. The hypothesis is as follows:

T1. The brick process was in its init but had not yet finished graph generation.
T2. GlusterD sent a barrier brick op (as a trigger for the snapshot initiated by the snapshot scheduler) because it considered the brick to be connected (it had received the rpc connect notify from the brick process).

The time gap between T1 and T2 is very small, and currently GlusterD has no way of knowing whether the brick process has finished all of its initialization, including graph generation.

One mitigation for this crash is to avoid the null pointer dereference itself, which can be addressed by a simple patch; then, even if we hit this race, the barrier op would simply fail instead of crashing the brick. Fixing the race entirely needs a more concrete solution, which may not be feasible in the 3.2.0 timelines. A sketch of such a guard is given below.
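For illustration only, here is a minimal standalone sketch (not the actual GlusterFS source) of the kind of guard the mitigation describes: check ctx->active before dereferencing it and fail the barrier op if the graph has not been initialized yet. The struct and function names in the sketch (struct ctx, struct graph, handle_barrier) are illustrative stand-ins, not GlusterFS identifiers.

/*
 * Standalone model of the defensive check: the barrier handler must
 * not dereference ctx->active before graph initialization has set it.
 * All names below are illustrative, not GlusterFS identifiers.
 */
#include <stdio.h>
#include <stddef.h>

/* Illustrative stand-ins for the global context and the active graph. */
struct graph {
    const char *first_xlator_name;
};

struct ctx {
    struct graph *active;   /* populated only after graph initialization */
};

/* Returns 0 on success, -1 if the barrier op must be failed. */
static int
handle_barrier (struct ctx *ctx)
{
    if (ctx == NULL || ctx->active == NULL) {
        /* Brick is still initializing; fail the op instead of crashing. */
        fprintf (stderr, "barrier: graph not initialized yet, failing op\n");
        return -1;
    }

    printf ("barrier applied on graph starting at %s\n",
            ctx->active->first_xlator_name);
    return 0;
}

int
main (void)
{
    struct graph g = { .first_xlator_name = "repvol-server" };
    struct ctx early = { .active = NULL };   /* T1: graph not yet built */
    struct ctx ready = { .active = &g };     /* after init completes */

    handle_barrier (&early);   /* fails gracefully instead of crashing */
    handle_barrier (&ready);   /* succeeds */
    return 0;
}

With such a guard in place, the race between T1 and T2 still exists, but its worst outcome is a failed barrier (and hence a failed snapshot attempt) rather than a brick crash.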
REVIEW: http://review.gluster.org/16043 (glusterfsd : fix null pointer dereference in glusterfs_handle_barrier) posted (#1) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/16043 (glusterfsd : fix null pointer dereference in glusterfs_handle_barrier) posted (#2) for review on master by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/16043 committed in master by Vijay Bellur (vbellur)
------
commit 369c619f946f9ec1cf86cc83a7dcb11c29f1f0c7
Author: Atin Mukherjee <amukherj>
Date:   Tue Dec 6 16:21:41 2016 +0530

    glusterfsd : fix null pointer dereference in glusterfs_handle_barrier

    Change-Id: Iab86a3c4970e54c22d3170e68708e0ea432a8ea4
    BUG: 1401921
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/16043
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/