Description of problem:
On build https://build.gluster.org/job/regression-test-with-multiplex/786/console the test case ./tests/bugs/core/bug-1432542-mpx-restart-crash.t is crashing.

Version-Release number of selected component (if applicable):

How reproducible:
The test case tests/bugs/core/bug-1432542-mpx-restart-crash.t generates a crash while running in a loop.

Steps to Reproduce:
1. Run tests/bugs/core/bug-1432542-mpx-restart-crash.t in a loop.

Actual results:
The test case crashes.

Expected results:
The test case should not crash.

Additional info:
Hi,

Below is the backtrace pattern for the generated crash:
...... ....... .......
22:04:50 #0 0x00007f86690c27fc in quota_lookup (frame=0x7f8624062fe8, this=0x7f864421f580, loc=0x7f85d87688d0, xattr_req=0x0) at /home/jenkins/root/workspace/centos7-regression/xlators/features/quota/src/quota.c:1663 22:04:50 priv = 0x0 22:04:50 ret = -1 22:04:50 local = 0x0 22:04:50 __FUNCTION__ = "quota_lookup" 22:04:50 #1 0x00007f8668e9e23b in io_stats_lookup (frame=0x7f8624062eb8, this=0x7f8644220e00, loc=0x7f85d87688d0, xdata=0x0) at /home/jenkins/root/workspace/centos7-regression/xlators/debug/io-stats/src/io-stats.c:2784 22:04:50 _new = 0x7f8624062fe8 22:04:50 old_THIS = 0x7f8644220e00 22:04:50 next_xl_fn = 0x7f86690c27a7 <quota_lookup> 22:04:50 tmp_cbk = 0x7f8668e920f2 <io_stats_lookup_cbk> 22:04:50 __FUNCTION__ = "io_stats_lookup" 22:04:50 #2 0x00007f867ccd6862 in default_lookup (frame=0x7f8624062eb8, this=0x7f8644222ab0, loc=0x7f85d87688d0, xdata=0x0) at defaults.c:2714 22:04:50 old_THIS = 0x7f8644222ab0 22:04:50 next_xl = 0x7f8644220e00 22:04:50 next_xl_fn = 0x7f8668e9de1e <io_stats_lookup> 22:04:50 opn = 27 22:04:50 __FUNCTION__ = "default_lookup" 22:04:50 #3 0x00007f867cc52d36 in syncop_lookup (subvol=0x7f8644222ab0, loc=0x7f85d87688d0, iatt=0x7f85d8768830, parent=0x0, xdata_in=0x0, xdata_out=0x0) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop.c:1260 22:04:50 _new = 0x7f8624062eb8 22:04:50 old_THIS = 0x7f866c033d60 22:04:50 next_xl_fn = 0x7f867ccd6674 <default_lookup> 22:04:50 tmp_cbk = 0x7f867cc52778 <syncop_lookup_cbk> 22:04:50 task = 0x0 22:04:50 frame = 0x7f8624062d88 22:04:50 args = {op_ret = 0, op_errno = 0, iatt1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 
times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt2 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, xattr = 0x0, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, vector = 0x0, count = 0, iobref = 0x0, buffer = 0x0, xdata = 0x0, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, lease = {cmd = 0, lease_type = NONE, lease_id = '\000' <repeats 15 times>, lease_flags = 0}, dict_out = 0x0, uuid = '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0, lock_dict = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, barrier = {initialized = false, guard = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, 
__woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0, waitfor = 0}, task = 0x0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, done = 0, entries = {{list = {next = 0x0, prev = 0x0}, {next = 0x0, prev = 0x0}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, dict = 0x0, inode = 0x0, d_name = 0x7f85d8768358 ""}, offset = 0, locklist = {list = {next = 0x0, prev = 0x0}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, client_uid = 0x0, lk_flags = 0}} 22:04:50 __FUNCTION__ = "syncop_lookup" 22:04:50 #4 0x00007f8668a1a582 in server_first_lookup (this=0x7f866c033d60, client=0x7f8371896220, reply=0x7f8624fe0018) at /home/jenkins/root/workspace/centos7-regression/xlators/protocol/server/src/server-handshake.c:382 22:04:50 loc = {path = 0x7f8668a41a95 "/", name = 0x7f8668a41b9a "", inode = 0x7f86240625e8, parent = 0x0, gfid = '\000' 
<repeats 15 times>, "\001", pargfid = '\000' <repeats 15 times>} 22:04:50 iatt = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}} 22:04:50 dict = 0x0 22:04:50 ret = 0 22:04:50 xl = 0x7f8644222ab0 22:04:50 msg = 0x0 22:04:50 inode = 0x0 22:04:50 bname = 0x0 22:04:50 str = 0x0 22:04:50 tmp = 0x0 22:04:50 saveptr = 0x0 22:04:50 __FUNCTION__ = "server_first_lookup" 22:04:50 #5 0x00007f8668a1c0ed in server_setvolume (req=0x7f86249d4e28) at /home/jenkins/root/workspace/centos7-regression/xlators/protocol/server/src/server-handshake.c:886 22:04:50 args = {dict = {dict_len = 826, dict_val = 0x7f8370ba1d90 ""}} 22:04:50 rsp = 0x0 22:04:50 client = 0x7f8371896220 22:04:50 serv_ctx = 0x7f837178dce0 22:04:50 conf = 0x7f866c03b840 22:04:50 peerinfo = 0x7f8658432690 22:04:50 reply = 0x7f8624fe0018 22:04:50 config_params = 0x7f8370ba2108 22:04:50 params = 0x7f86249d8408 22:04:50 name = 0x7f8370ba4bf0 "/d/backends/vol02/brick0" 22:04:50 client_uid = 0x7f8370ba4260 "CTX_ID:7d34d570-1852-4243-b24f-9275a047f237-GRAPH_ID:0-PID:32369-HOST:builder106.cloud.gluster.org-PC_NAME:patchy-vol02-client-0-RECON_NO:-1" 22:04:50 clnt_version = 0x7f8370ba4020 "4.2dev" 22:04:50 xl = 0x7f8644222ab0 22:04:50 msg = 0x0 22:04:50 volfile_key = 0x7f8370ba3de0 "patchy-vol02" 22:04:50 this = 0x7f866c033d60 22:04:50 checksum = 0 22:04:50 ret = 0 22:04:50 op_ret = 0 22:04:50 op_errno = 22 22:04:50 buf = 0x0 22:04:50 opversion = 40200 22:04:50 xprt = 
0x7f866c03b3d0 22:04:50 fop_version = 1298437 22:04:50 mgmt_version = 0 22:04:50 ctx = 0x18df010 22:04:50 tmp = 0x7f86447cf470 22:04:50 subdir_mount = 0x0 22:04:50 client_name = 0x7f8668a41e8b "unknown" 22:04:50 cleanup_starting = false 22:04:50 __FUNCTION__ = "server_setvolume" 22:04:50 __PRETTY_FUNCTION__ = "server_setvolume" 22:04:50 #6 0x00007f867c9c07e2 in rpcsvc_handle_rpc_call (svc=0x7f866c048e40, trans=0x7f86584325d0, msg=0x7f8370ba18e0) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-lib/src/rpcsvc.c:721 22:04:50 actor = 0x7f8668c548c0 <gluster_handshake_actors+64> 22:04:50 actor_fn = 0x7f8668a1a876 <server_setvolume> 22:04:50 req = 0x7f86249d4e28 22:04:50 ret = -1 22:04:50 port = 46465 22:04:50 is_unix = false 22:04:50 empty = false 22:04:50 unprivileged = true 22:04:50 reply = 0x0 22:04:50 drc = 0x0 22:04:50 __FUNCTION__ = "rpcsvc_handle_rpc_call" 22:04:50 #7 0x00007f867c9c0b35 in rpcsvc_notify (trans=0x7f86584325d0, mydata=0x7f866c048e40, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f8370ba18e0) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-lib/src/rpcsvc.c:815 22:04:50 ret = -1 22:04:50 msg = 0x7f8370ba18e0 22:04:50 new_trans = 0x0 22:04:50 svc = 0x7f866c048e40 22:04:50 listener = 0x0 22:04:50 __FUNCTION__ = "rpcsvc_notify" 22:04:50 #8 0x00007f867c9c673f in rpc_transport_notify (this=0x7f86584325d0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f8370ba18e0) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-lib/src/rpc-transport.c:537 22:04:50 ret = -1 22:04:50 __FUNCTION__ = "rpc_transport_notify" 22:04:50 #9 0x00007f86722d7ed8 in socket_event_poll_in (this=0x7f86584325d0, notify_handled=true) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-transport/socket/src/socket.c:2462 22:04:50 ret = 0 22:04:50 pollin = 0x7f8370ba18e0 22:04:50 priv = 0x7f8658432b30 22:04:50 ctx = 0x18df010 22:04:50 #10 0x00007f86722d8546 in socket_event_handler (fd=550, idx=299, gen=1, data=0x7f86584325d0, poll_in=1, poll_out=0, poll_err=0) at 
/home/jenkins/root/workspace/centos7-regression/rpc/rpc-transport/socket/src/socket.c:2618 22:04:50 this = 0x7f86584325d0 22:04:50 priv = 0x7f8658432b30 22:04:50 ret = 0 22:04:50 ctx = 0x18df010 22:04:50 socket_closed = false 22:04:50 notify_handled = false 22:04:50 __FUNCTION__ = "socket_event_handler" 22:04:50 #11 0x00007f867cc7a929 in event_dispatch_epoll_handler (event_pool=0x18e4540, event=0x7f85d8768ea0) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/event-epoll.c:587 22:04:50 ev_data = 0x7f85d8768ea4 22:04:50 slot = 0x191c9f0 22:04:50 handler = 0x7f86722d8278 <socket_event_handler> 22:04:50 data = 0x7f86584325d0 22:04:50 idx = 299 22:04:50 gen = 1 22:04:50 ret = -1 22:04:50 fd = 550 22:04:50 handled_error_previously = false 22:04:50 __FUNCTION__ = "event_dispatch_epoll_handler" 22:04:50 #12 0x00007f867cc7ac1c in event_dispatch_epoll_worker (data=0x7f86397bd1d0) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/event-epoll.c:663 22:04:50 event = {events = 1, data = {ptr = 0x10000012b, fd = 299, u32 = 299, u64 = 4294967595}} 22:04:50 ret = 1 22:04:50 ev_data = 0x7f86397bd1d0 22:04:50 event_pool = 0x18e4540 22:04:50 myindex = 13 22:04:50 timetodie = 0 22:04:50 __FUNCTION__ = "event_dispatch_epoll_worker" 22:04:50 #13 0x00007f867bc57e25 in start_thread () from /lib64/libpthread.so.0 22:04:50 No symbol table info available. 22:04:50 #14 0x00007f867b31cbad in clone () from /lib64/libc.so.6 22:04:50 No symbol table info available. 
22:04:50 ===================== #12077 0x00007ff1ea8b42e8 in server_setvolume (req=0x7ff1c8616458) at server-handshake.c:886 886 op_ret = server_first_lookup (this, client, reply); $3 = {name = 0x7ff1b8b953a0 "/d/backends/vol02/brick2", type = 0x7ff1b8b95410 "performance/decompounder", instance_name = 0x0, next = 0x7ff1b8b928c0, prev = 0x7ff1b8b95970, parents = 0x7ff1b8b97ef0, children = 0x7ff1b8b93ab0, options = 0x0, dlhandle = 0x7ff1ec02e480, fops = 0x7ff1ead11000 <fops>, cbks = 0x7ff1ead11840 <cbks>, dumpops = 0x0, volume_options = {next = 0x7ff1b8b943d0, prev = 0x7ff1b8b943d0}, fini = 0x7ff1eab0df10 <fini>, init = 0x7ff1eab0de40 <init>, reconfigure = 0x0, mem_acct_init = 0x7ff1eab0de20 <mem_acct_init>, dump_metrics = 0x0, notify = 0x7ff1ff7f3d20 <default_notify>, loglevel = GF_LOG_NONE, stats = { total = {metrics = {{fop = {lk = 0x7ff1b8b94418 "", value = 0}, cbk = {lk = 0x7ff1b8b94420 "", value = 0}} <repeats 14 times>, {fop = {lk = 0x7ff1b8b944f8 "\001", value = 1}, cbk = { lk = 0x7ff1b8b94500 "", value = 0}}, {fop = {lk = 0x7ff1b8b94508 "\001", value = 1}, cbk = { lk = 0x7ff1b8b94510 "", value = 0}}, {fop = {lk = 0x7ff1b8b94518 "", value = 0}, cbk = { lk = 0x7ff1b8b94520 "", value = 0}}, {fop = {lk = 0x7ff1b8b94528 "", value = 0}, cbk = { lk = 0x7ff1b8b94530 "", value = 0}}, {fop = {lk = 0x7ff1b8b94538 "\002", value = 2}, cbk = { lk = 0x7ff1b8b94540 "", value = 0}}, {fop = {lk = 0x7ff1b8b94548 "", value = 0}, cbk = { lk = 0x7ff1b8b94550 "", value = 0}}, {fop = {lk = 0x7ff1b8b94558 "\001", value = 1}, cbk = { lk = 0x7ff1b8b94560 "", value = 0}}, {fop = {lk = 0x7ff1b8b94568 "", value = 0}, cbk = { lk = 0x7ff1b8b94570 "", value = 0}}, {fop = {lk = 0x7ff1b8b94578 "", value = 0}, cbk = { lk = 0x7ff1b8b94580 "", value = 0}}, {fop = {lk = 0x7ff1b8b94588 "", value = 0}, cbk = { lk = 0x7ff1b8b94590 "", value = 0}}, {fop = {lk = 0x7ff1b8b94598 "", value = 0}, cbk = { lk = 0x7ff1b8b945a0 "", value = 0}}, {fop = {lk = 0x7ff1b8b945a8 "", value = 0}, cbk = { lk = 
0x7ff1b8b945b0 "", value = 0}}, {fop = {lk = 0x7ff1b8b945b8 "", value = 0}, cbk = { lk = 0x7ff1b8b945c0 "", value = 0}}, {fop = {lk = 0x7ff1b8b945c8 "\006", value = 6}, cbk = { lk = 0x7ff1b8b945d0 "", value = 0}}, {fop = {lk = 0x7ff1b8b945d8 "\002", value = 2}, cbk = { lk = 0x7ff1b8b945e0 "", value = 0}}, {fop = {lk = 0x7ff1b8b945e8 "", value = 0}, cbk = { ---Type <return> to continue, or q <return> to quit--- lk = 0x7ff1b8b945f0 "", value = 0}} <repeats 29 times>}, count = {lk = 0x7ff1b8b947b8 "\r", value = 13}}, interval = {latencies = {{min = 4294967295, max = 0, total = 0, count = 0} <repeats 58 times>}, metrics = {{fop = {lk = 0x7ff1b8b94f00 "", value = 0}, cbk = { lk = 0x7ff1b8b94f08 "", value = 0}} <repeats 14 times>, {fop = {lk = 0x7ff1b8b94fe0 "\001", value = 1}, cbk = {lk = 0x7ff1b8b94fe8 "", value = 0}}, {fop = {lk = 0x7ff1b8b94ff0 "\001", value = 1}, cbk = {lk = 0x7ff1b8b94ff8 "", value = 0}}, {fop = {lk = 0x7ff1b8b95000 "", value = 0}, cbk = {lk = 0x7ff1b8b95008 "", value = 0}}, {fop = {lk = 0x7ff1b8b95010 "", value = 0}, cbk = {lk = 0x7ff1b8b95018 "", value = 0}}, {fop = {lk = 0x7ff1b8b95020 "\002", value = 2}, cbk = {lk = 0x7ff1b8b95028 "", value = 0}}, {fop = {lk = 0x7ff1b8b95030 "", value = 0}, cbk = {lk = 0x7ff1b8b95038 "", value = 0}}, {fop = {lk = 0x7ff1b8b95040 "\001", value = 1}, cbk = {lk = 0x7ff1b8b95048 "", value = 0}}, {fop = {lk = 0x7ff1b8b95050 "", value = 0}, cbk = {lk = 0x7ff1b8b95058 "", value = 0}}, {fop = {lk = 0x7ff1b8b95060 "", value = 0}, cbk = {lk = 0x7ff1b8b95068 "", value = 0}}, {fop = {lk = 0x7ff1b8b95070 "", value = 0}, cbk = {lk = 0x7ff1b8b95078 "", value = 0}}, {fop = {lk = 0x7ff1b8b95080 "", value = 0}, cbk = {lk = 0x7ff1b8b95088 "", value = 0}}, {fop = {lk = 0x7ff1b8b95090 "", value = 0}, cbk = {lk = 0x7ff1b8b95098 "", value = 0}}, {fop = {lk = 0x7ff1b8b950a0 "", value = 0}, cbk = {lk = 0x7ff1b8b950a8 "", value = 0}}, {fop = {lk = 0x7ff1b8b950b0 "\006", value = 6}, cbk = {lk = 0x7ff1b8b950b8 "", value = 0}}, {fop = {lk = 
0x7ff1b8b950c0 "\002", value = 2}, cbk = {lk = 0x7ff1b8b950c8 "", value = 0}}, {fop = {lk = 0x7ff1b8b950d0 "", value = 0}, cbk = {lk = 0x7ff1b8b950d8 "", value = 0}} <repeats 29 times>}, count = { lk = 0x7ff1b8b952a0 "\r", value = 13}}}, history = 0x0, ctx = 0x55f373dd3010, graph = 0x7ff1b8b78d50, itable = 0x7ff1c87f08c0, init_succeeded = 1 '\001', private = 0x0, mem_acct = 0x0, winds = 0, switched = 0 '\000', local_pool = 0x0, is_autoloaded = false, volfile_id = 0x7ff1b8b7a060 "patchy-vol02.vm252-98.gsslab.pnq2.redhat.com.d-backends-vol02-brick2", xl_id = 19, op_version = {0, 0, 0, 0}, flags = 0, id = 0, identifier = 0x0, pass_through = false, pass_through_fops = 0x7ff1ffa27300 <_default_fops>, cleanup_starting = 1, call_cleanup = 1}
....... ....... ........

RCA: Inspecting client->bound_xl in server_setvolume shows that the cleanup_starting/call_cleanup flags are already 1, which means fini has already been called for this xlator. server_setvolume does check the cleanup_starting flag at the start of the function, so the cleanup thread must have set the flag afterwards. Once the xlator is saved in client->bound_xl (after the gf_authenticate call in server_setvolume), the cleanup thread will not call xlator_fini until it receives a disconnect notification for this client. Adding a cleanup_starting check just before the server_first_lookup call resolves the issue.

Regards,
Mohit Agrawal
REVIEW: https://review.gluster.org/20427 (glusterfs: Brick process is crash at the time of call server_first_lookup) posted (#3) for review on master by MOHIT AGRAWAL
COMMIT: https://review.gluster.org/20427 committed in master by "Amar Tumballi" <amarts> with a commit message-

glusterfs: Brick process is crash at the time of call server_first_lookup

Problem: The brick process is crashing while executing test case
tests/bugs/core/bug-1432542-mpx-restart-crash.t.

Solution: When a client initiates a connection with a brick process, the
brick process calls server_setvolume. If the cleanup thread sets the
cleanup_starting flag after server_setvolume has already checked it, the
brick process can crash while calling lookup on the brick root. To avoid
the crash, check the cleanup_starting flag again just before calling
server_first_lookup.

BUG: 1597627
Change-Id: I12542c124c76429184df34a04c1eae1a30052ca7
fixes: bz#1597627
Signed-off-by: Mohit Agrawal <moagrawa>

Note: To test the patch, execute test case
tests/bugs/core/bug-1432542-mpx-restart-crash.t in a loop around 100 times.
*** Bug 1609724 has been marked as a duplicate of this bug. ***
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/