Bug 1380619 - Ganesha crashes with segfault while doing refresh-config with 3.2 builds.
Summary: Ganesha crashes with segfault while doing refresh-config with 3.2 builds.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: io-threads
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Pranith Kumar K
QA Contact: surabhi
URL:
Whiteboard:
Depends On:
Blocks: 1351528 1381353 1382065 1384865
 
Reported: 2016-09-30 07:33 UTC by Shashank Raj
Modified: 2017-03-23 06:06 UTC (History)
14 users

Fixed In Version: glusterfs-3.8.4-3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 06:06:31 UTC




Links
Red Hat Product Errata RHSA-2017:0486 (normal, SHIPPED_LIVE): Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update - 2017-03-23 09:18:45 UTC

Description Shashank Raj 2016-09-30 07:33:05 UTC
Description of problem:

Ganesha crashes with segfault while doing refresh-config with 3.2 builds.

Version-Release number of selected component (if applicable):

[root@dhcp43-110 ~]# rpm -qa|grep glusterfs
glusterfs-api-3.8.4-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-1.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-1.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-libs-3.8.4-1.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-1.el7rhgs.x86_64
glusterfs-cli-3.8.4-1.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-1.el7rhgs.x86_64
glusterfs-3.8.4-1.el7rhgs.x86_64
glusterfs-server-3.8.4-1.el7rhgs.x86_64
glusterfs-events-3.8.4-1.el7rhgs.x86_64

[root@dhcp43-110 ~]# rpm -qa|grep ganesha
nfs-ganesha-debuginfo-2.4.0-2.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.0-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-1.el7rhgs.x86_64
nfs-ganesha-2.4.0-2.el7rhgs.x86_64


How reproducible:

Consistent

Steps to Reproduce:
1. Create ganesha setup on a 4 node cluster
2. Create a volume and enable ganesha on the volume.
3. Edit some parameters in the volume config file.
4. Run refresh-config from one of the nodes.
5. Observe that ganesha crashes on all the nodes with a segmentation fault.
6. The following backtrace is seen with gdb:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fcdc47a2700 (LWP 8512)]
0x00007fcdc4bde6f3 in ?? ()
(gdb) bt
#0  0x00007fcdc4bde6f3 in ?? ()
#1  0x0000000000000000 in ?? ()
(gdb) generate-core-file 
warning: target file /proc/8443/cmdline contained unexpected null characters
Couldn't get registers: No such process.
(gdb) 

Note: a core file could not be obtained even with gdb. The issue is easily reproducible while running refresh-config (once or twice).


Actual results:

Ganesha crashes with segfault while doing refresh-config with 3.2 builds.

Expected results:

There should not be any crashes.

Additional info:

Comment 2 Shashank Raj 2016-09-30 09:51:01 UTC
These are the messages seen in dmesg during 2 segfaults:

[104058.941501] ganesha.nfsd[2075]: segfault at 7f9400d216f3 ip 00007f9400d216f3 sp 00007f937d7f54f0 error 14 in quick-read.so[7f9400b4b000+1ff000]

[182504.941484] ganesha.nfsd[27850]: segfault at 7f546c2266f3 ip 00007f546c2266f3 sp 00007f545f4da4f0 error 14 in open-behind.so[7f546c09f000+200000]

Comment 3 Shashank Raj 2016-10-03 09:07:18 UTC
Able to generate the cores: Can be accessed at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1380619/

Comment 4 Soumya Koduri 2016-10-03 10:57:30 UTC
The stack seems to be corrupted most of the time. And from dmesg, we can be almost certain that there is some issue with the gluster libraries:

[root@dhcp43-110 ~]# dmesg | grep segfault
[104058.941501] ganesha.nfsd[2075]: segfault at 7f9400d216f3 ip 00007f9400d216f3 sp 00007f937d7f54f0 error 14 in quick-read.so[7f9400b4b000+1ff000]
[182504.941484] ganesha.nfsd[27850]: segfault at 7f546c2266f3 ip 00007f546c2266f3 sp 00007f545f4da4f0 error 14 in open-behind.so[7f546c09f000+200000]
[434401.941505] ganesha.nfsd[26018]: segfault at 7fc3110b26f3 ip 00007fc3110b26f3 sp 00007fc3103754f0 error 14 in libssl.so.1.0.1e[7fc31286b000+63000]
[440489.941585] ganesha.nfsd[21722]: segfault at 7f81727996f3 ip 00007f81727996f3 sp 00007f81893834f0 error 14
[442058.489944] ganesha.nfsd[26590]: segfault at 6e ip 00007ffac3df96f3 sp 00007ffad82df070 error 6 in dht.so[7ffac3d7b000+7f000]
[445434.817032] ganesha.nfsd[2659]: segfault at 6e ip 00007f008029d6f3 sp 00007f0080ae0070 error 6 in dht.so[7f008021f000+7f000]
[root@dhcp43-110 ~]#

Comment 5 Soumya Koduri 2016-10-03 10:59:41 UTC
Using gdb, when I ran refresh-config around 10 times, only twice did I get the below proper stack traces, which are very unusual to hit:

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7f7e727fc700 (LWP 15998)]
0x00007f7f45be35f7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56	  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x00007f7f45be35f7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007f7f45be4ce8 in __GI_abort () at abort.c:90
#2  0x00007f7f45c23327 in __libc_message (do_abort=do_abort@entry=2, 
    fmt=fmt@entry=0x7f7f45d2d488 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x00007f7f45c2b053 in malloc_printerr (ar_ptr=0x7f7f30000020, ptr=<optimized out>, 
    str=0x7f7f45d2d4d0 "free(): invalid next size (fast)", action=3) at malloc.c:5022
#4  _int_free (av=0x7f7f30000020, p=<optimized out>, have_lock=0) at malloc.c:3842
#5  0x00007f7f42b452bd in data_destroy (data=0x7f7e8ca80788) at dict.c:226
#6  0x00007f7f42b45f41 in dict_set_lk (this=this@entry=0x7f7e8c8c2cc8, 
    key=key@entry=0x7f7f4291bab4 "transport-type", value=value@entry=0x7f7e8ca810e0, replace=replace@entry=_gf_true)
    at dict.c:351
#7  0x00007f7f42b460b1 in dict_set (this=0x7f7e8c8c2cc8, key=0x7f7f4291bab4 "transport-type", value=0x7f7e8ca810e0)
    at dict.c:424
#8  0x00007f7f42b4828c in dict_set_str (this=this@entry=0x7f7e8c8c2cc8, 
    key=key@entry=0x7f7f4291bab4 "transport-type", str=str@entry=0x7f7f4291bc3c "socket") at dict.c:2168
#9  0x00007f7f42912ca5 in rpc_transport_load (ctx=ctx@entry=0x7f7e8c006120, options=options@entry=0x7f7e8c8c2cc8, 
    trans_name=trans_name@entry=0x7f7f310c1100 "nfsvol1-client-11") at rpc-transport.c:238
#10 0x00007f7f42917270 in rpc_clnt_connection_init (name=0x7f7f310c1100 "nfsvol1-client-11", 
    options=0x7f7e8c8c2cc8, ctx=0x7f7e8c006120, clnt=0x7f7f317d10e0) at rpc-clnt.c:1054
#11 rpc_clnt_new (options=0x7f7e8c8c2cc8, owner=owner@entry=0x7f7f33114f20, 
    name=0x7f7f310c1100 "nfsvol1-client-11", reqpool_size=<optimized out>, reqpool_size@entry=0) at rpc-clnt.c:1143
#12 0x00007f7f3cf484ae in client_init_rpc (this=this@entry=0x7f7f33114f20) at client.c:2527
#13 0x00007f7f3cf4962c in init (this=0x7f7f33114f20) at client.c:2750
#14 0x00007f7f42b4af1f in __xlator_init (xl=0x7f7f33114f20) at xlator.c:403
#15 xlator_init (xl=xl@entry=0x7f7f33114f20) at xlator.c:428
#16 0x00007f7f42b81379 in glusterfs_graph_init (graph=graph@entry=0x7f7f32df1480) at graph.c:320
#17 0x00007f7f42b81c9b in glusterfs_graph_activate (graph=graph@entry=0x7f7f32df1480, ctx=ctx@entry=0x7f7e8c006120)
    at graph.c:670
#18 0x00007f7f42e1f0fd in glfs_process_volfp (fs=fs@entry=0x7f7e8c0035f0, fp=fp@entry=0x7f7f30020630)
    at glfs-mgmt.c:78
#19 0x00007f7f42e1f2b6 in glfs_mgmt_getspec_cbk (req=<optimized out>, iov=<optimized out>, count=<optimized out>, 
    myframe=0x7f7e8c03c31c) at glfs-mgmt.c:604
#20 0x00007f7f42917720 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f7e8c008370, pollin=pollin@entry=0x7f7f31ee8b00)
    at rpc-clnt.c:791
#21 0x00007f7f429179ff in rpc_clnt_notify (trans=<optimized out>, mydata=0x7f7e8c0083a0, event=<optimized out>, 
    data=0x7f7f31ee8b00) at rpc-clnt.c:962
#22 0x00007f7f42913923 in rpc_transport_notify (this=this@entry=0x7f7e8c03a890, 
    event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f7f31ee8b00) at rpc-transport.c:541
#23 0x00007f7f34c99e94 in socket_event_poll_in (this=this@entry=0x7f7e8c03a890) at socket.c:2267
#24 0x00007f7f34c9c334 in socket_event_handler (fd=<optimized out>, idx=0, data=0x7f7e8c03a890, poll_in=1, 
    poll_out=0, poll_err=0) at socket.c:2380
#25 0x00007f7f42ba6f50 in event_dispatch_epoll_handler (event=0x7f7e727fb540, event_pool=0x7f7e8c0316c0)
    at event-epoll.c:571
#26 event_dispatch_epoll_worker (data=0x7f7f14002be0) at event-epoll.c:674
#27 0x00007f7f465d7dc5 in start_thread (arg=0x7f7e727fc700) at pthread_create.c:308
#28 0x00007f7f45ca4ced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) f 5
#5  0x00007f7f42b452bd in data_destroy (data=0x7f7e8ca80788) at dict.c:226
226	                        GF_FREE (data->data);
(gdb) p data->data
$1 = 0x7f7f30037cc0 "tcp"
(gdb) f 9
#9  0x00007f7f42912ca5 in rpc_transport_load (ctx=ctx@entry=0x7f7e8c006120, options=options@entry=0x7f7e8c8c2cc8, 
    trans_name=trans_name@entry=0x7f7f310c1100 "nfsvol1-client-11") at rpc-transport.c:238
238				ret = dict_set_str (options,
(gdb) l
233	
234				if (ret < 0)
235					gf_log ("dict", GF_LOG_DEBUG,
236						"setting address-family failed");
237	
238				ret = dict_set_str (options,
239						    "transport-type", "socket");
240				if (ret < 0)
241					gf_log ("dict", GF_LOG_DEBUG,
242						"setting transport-type failed");
(gdb) f 7
#7  0x00007f7f42b460b1 in dict_set (this=0x7f7e8c8c2cc8, key=0x7f7f4291bab4 "transport-type", value=0x7f7e8ca810e0)
    at dict.c:424
424	        ret = dict_set_lk (this, key, value, 1);
(gdb) f 6
#6  0x00007f7f42b45f41 in dict_set_lk (this=this@entry=0x7f7e8c8c2cc8, 
    key=key@entry=0x7f7f4291bab4 "transport-type", value=value@entry=0x7f7e8ca810e0, replace=replace@entry=_gf_true)
    at dict.c:351
351	                        data_unref (unref_data);
(gdb)

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

[New Thread 0x7f7568935700 (LWP 682)]

Program received signal SIGILL, Illegal instruction.
[Switching to Thread 0x7f75304e9700 (LWP 30937)]
0x00007f754f7d46f3 in STACK_DESTROY (stack=0x0) at ../../../../libglusterfs/src/stack.h:161
161	        LOCK (&stack->pool->lock);
(gdb) bt
#0  0x00007f754f7d46f3 in STACK_DESTROY (stack=0x0) at ../../../../libglusterfs/src/stack.h:161
#1  ioc_page_fault (ioc_inode=0x7f751c018588, frame=0x7f751c018560, fd=0xffffffff00000000, offset=0) at page.c:608
#2  0x0000000000000000 in ?? ()
(gdb)

Comment 6 Soumya Koduri 2016-10-03 11:11:09 UTC
FYI - it is not just refresh-config; this issue is hit with volume re-export using the 'ganesha.enable' CLI option as well.

Comment 7 Soumya Koduri 2016-10-03 13:09:29 UTC
Shashank could reproduce this issue on RHEL-6 builds as well, but strangely we are unable to reproduce it on our dev Fedora machines.

Comment 8 Soumya Koduri 2016-10-03 13:12:41 UTC
Got the below crash at one point:

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7f14fe49a700 (LWP 8798)]
0x00007f153d8955f7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56	  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x00007f153d8955f7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007f153d896ce8 in __GI_abort () at abort.c:90
#2  0x00007f153d8d5327 in __libc_message (do_abort=do_abort@entry=2, 
    fmt=fmt@entry=0x7f153d9df488 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x00007f153d8dd053 in malloc_printerr (ar_ptr=0x7f1520000020, ptr=<optimized out>, 
    str=0x7f153d9df4d0 "free(): invalid next size (fast)", action=3) at malloc.c:5022
#4  _int_free (av=0x7f1520000020, p=<optimized out>, have_lock=0) at malloc.c:3842
#5  0x00007f153a7fc32a in xlator_dynload (xl=xl@entry=0x7f152004e150) at xlator.c:283
#6  0x00007f153a7fcbe9 in xlator_set_type (xl=xl@entry=0x7f152004e150, type=type@entry=0x7f153a89d723 "meta")
    at xlator.c:295
#7  0x00007f153a833070 in glusterfs_graph_insert (graph=graph@entry=0x7f152000d150, ctx=ctx@entry=0x7f14f8004820, 
    type=type@entry=0x7f153a89d723 "meta", name=name@entry=0x7f153a89d715 "meta-autoload", 
    autoload=autoload@entry=_gf_true) at graph.c:148
#8  0x00007f153a833203 in glusterfs_graph_meta (graph=graph@entry=0x7f152000d150, ctx=ctx@entry=0x7f14f8004820)
---Type <return> to continue, or q <return> to quit--- 
    at graph.c:207
#9  0x00007f153a8337f4 in glusterfs_graph_prepare (graph=graph@entry=0x7f152000d150, ctx=ctx@entry=0x7f14f8004820)
    at graph.c:503
#10 0x00007f153aad10eb in glfs_process_volfp (fs=fs@entry=0x7f14f8bfe400, fp=fp@entry=0x7f1520000a00)
    at glfs-mgmt.c:72
#11 0x00007f153aad12b6 in glfs_mgmt_getspec_cbk (req=<optimized out>, iov=<optimized out>, count=<optimized out>, 
    myframe=0x7f14f803b6bc) at glfs-mgmt.c:604
#12 0x00007f153a5c9720 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f14f8030a60, pollin=pollin@entry=0x7f1520002570)
    at rpc-clnt.c:791
#13 0x00007f153a5c99ff in rpc_clnt_notify (trans=<optimized out>, mydata=0x7f14f8030a90, event=<optimized out>, 
    data=0x7f1520002570) at rpc-clnt.c:962
#14 0x00007f153a5c5923 in rpc_transport_notify (this=this@entry=0x7f14f8032510, 
    event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f1520002570) at rpc-transport.c:541
#15 0x00007f152c94be94 in socket_event_poll_in (this=this@entry=0x7f14f8032510) at socket.c:2267
---Type <return> to continue, or q <return> to quit---
#16 0x00007f152c94e334 in socket_event_handler (fd=<optimized out>, idx=0, data=0x7f14f8032510, poll_in=1, 
    poll_out=0, poll_err=0) at socket.c:2380
#17 0x00007f153a858f50 in event_dispatch_epoll_handler (event=0x7f14fe499540, event_pool=0x7f14f8037440)
    at event-epoll.c:571
#18 event_dispatch_epoll_worker (data=0x7f150c0052a0) at event-epoll.c:674
#19 0x00007f153e289dc5 in start_thread (arg=0x7f14fe49a700) at pthread_create.c:308
#20 0x00007f153d956ced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) f 5
#5  0x00007f153a7fc32a in xlator_dynload (xl=xl@entry=0x7f152004e150) at xlator.c:283
283	        GF_FREE (name);
(gdb) p name
$1 = 0x7f1520022450 "/usr/lib64/glusterfs/3.8.4/xlator/meta.so"
(gdb) f 4
#4  _int_free (av=0x7f1520000020, p=<optimized out>, have_lock=0) at malloc.c:3842
3842	      malloc_printerr (check_action, errstr, chunk2mem(p), av);
(gdb) l
3837	    {
3838	      errstr = "free(): invalid pointer";
3839	    errout:
3840	      if (have_lock || locked)
3841		(void)mutex_unlock(&av->mutex);
3842	      malloc_printerr (check_action, errstr, chunk2mem(p), av);
3843	      if (have_lock)
3844		mutex_lock(&av->mutex);
3845	      return;
3846	    }
(gdb) f 5
#5  0x00007f153a7fc32a in xlator_dynload (xl=xl@entry=0x7f152004e150) at xlator.c:283
283	        GF_FREE (name);
(gdb) p ret
$2 = 0
(gdb) p global_xlator
$3 = {name = 0x7f153a896913 "glusterfs", type = 0x7f153a89b9ba "global", instance_name = 0x0, next = 0x0, 
  prev = 0x0, parents = 0x0, children = 0x0, options = 0x0, dlhandle = 0x0, fops = 0x0, cbks = 0x0, dumpops = 0x0, 
  volume_options = {next = 0x7f153aac7340 <global_xlator+96>, prev = 0x7f153aac7340 <global_xlator+96>}, 
  fini = 0x0, init = 0x0, reconfigure = 0x0, mem_acct_init = 0x0, notify = 0x0, loglevel = GF_LOG_NONE, 
  latencies = {{min = 0, max = 0, total = 0, std = 0, mean = 0, count = 0} <repeats 55 times>}, history = 0x0, 
  ctx = 0x7f15402f2a60, graph = 0x0, itable = 0x0, init_succeeded = 0 '\000', private = 0x0, 
  mem_acct = 0x7f14f80340d0, winds = 0, switched = 0 '\000', local_pool = 0x0, is_autoloaded = _gf_false}
(gdb) p global_xlator->ctx
$4 = (glusterfs_ctx_t *) 0x7f15402f2a60
(gdb) p global_xlator->ctx->mem_acct_enable 
$5 = 1
(gdb) p sizeof (struct mem_header)
$6 = 64
(gdb) p name - 64
$7 = 0x7f1520022410 "'"
(gdb) p name
$8 = 0x7f1520022450 "/usr/lib64/glusterfs/3.8.4/xlator/meta.so"
(gdb) p ((struct mem_header *)) $7
A syntax error in expression, near `) $7'.
(gdb) p (struct mem_header *) $7
$9 = (struct mem_header *) 0x7f1520022410
(gdb) p $9->magic
$10 = 3405691582
(gdb) p/x $9->magic
$11 = 0xcafebabe
(gdb) p/x $9->size
$12 = 0x2a
(gdb) p $9->size
$13 = 42
(gdb) p/x name+$13
$14 = 0x7f152002247a
(gdb) p *(uint32_t *)((char *)name + $9->size)
$15 = 3131961357
(gdb) p/x *(uint32_t *)((char *)name + $9->size)
$16 = 0xbaadf00d

>>> That means the 'name' ptr is intact and was allocated using GF_MALLOC.

(gdb) 
(gdb) p $9->mem_acct 
$17 = (struct mem_acct *) 0x7f14f8c1bf70
(gdb) p $9->type
$18 = 39
(gdb) p $17.rec[39]
$19 = {typestr = 0x7f153a89b4e5 "gf_common_mt_asprintf", size = 60, max_size = 186, num_allocs = 5, 
  total_allocs = 22, max_num_allocs = 6, lock = {spinlock = 1, mutex = {__data = {__lock = 1, __count = 0, 
        __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
      __size = "\001", '\000' <repeats 38 times>, __align = 1}}}
(gdb) p $17.rec[39].lock
$20 = {spinlock = 1, mutex = {__data = {__lock = 1, __count = 0, __owner = 0, __nusers = 0, __kind = 0, 
      __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = "\001", '\000' <repeats 38 times>, __align = 1}}
(gdb) p name
$21 = 0x7f1520022450 "/usr/lib64/glusterfs/3.8.4/xlator/meta.so"


(gdb) p $17.rec[39]
$29 = {typestr = 0x7f153a89b4e5 "gf_common_mt_asprintf", size = 60, max_size = 186, num_allocs = 5, 
  total_allocs = 22, max_num_allocs = 6, lock = {spinlock = 1, mutex = {__data = {__lock = 1, __count = 0, 
        __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
      __size = "\001", '\000' <repeats 38 times>, __align = 1}}}
(gdb) p $17
$30 = (struct mem_acct *) 0x7f14f8c1bf70


(gdb) p *$
$31 = {num_types = 163, lock = {spinlock = 1, mutex = {__data = {__lock = 1, __count = 0, __owner = 0, 
        __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, 
      __size = "\001", '\000' <repeats 38 times>, __align = 1}}, refcnt = 133, rec = 0x7f14f8c1bfa8}
(gdb) p global_xlator
$32 = {name = 0x7f153a896913 "glusterfs", type = 0x7f153a89b9ba "global", instance_name = 0x0, next = 0x0, 
  prev = 0x0, parents = 0x0, children = 0x0, options = 0x0, dlhandle = 0x0, fops = 0x0, cbks = 0x0, dumpops = 0x0, 
  volume_options = {next = 0x7f153aac7340 <global_xlator+96>, prev = 0x7f153aac7340 <global_xlator+96>}, 
  fini = 0x0, init = 0x0, reconfigure = 0x0, mem_acct_init = 0x0, notify = 0x0, loglevel = GF_LOG_NONE, 
  latencies = {{min = 0, max = 0, total = 0, std = 0, mean = 0, count = 0} <repeats 55 times>}, history = 0x0, 
  ctx = 0x7f15402f2a60, graph = 0x0, itable = 0x0, init_succeeded = 0 '\000', private = 0x0, 
  mem_acct = 0x7f14f80340d0, winds = 0, switched = 0 '\000', local_pool = 0x0, is_autoloaded = _gf_false}
(gdb) p global_xlator->ctx
$33 = (glusterfs_ctx_t *) 0x7f15402f2a60
(gdb) p global_xlator->ctx->m
master           measure_latency  mem_acct_enable  mempool_list     mgmt             mnt_pid
(gdb) p global_xlator->mem_acct
$34 = (struct mem_acct *) 0x7f14f80340d0
(gdb)

IIUC, since this is in an epoll thread, THIS should be pointing to &global_xlator. That means the 'name' string should have been allocated against global_xlator->mem_acct. But $34 (global_xlator->mem_acct) and $17 (the mem_acct in the header of the 'name' allocation) are different.
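The by-hand magic/trailer check performed in the gdb session above can be condensed into a small sketch. This is not the real glusterfs layout, just an assumption reconstructed from what gdb printed: a 64-byte header carrying a mem-type, a magic (0xcafebabe), the size, and the owning mem_acct, followed by the payload and a 4-byte 0xbaadf00d trailer. The field order and all names (mem_header fields, gf_malloc_sketch, gf_mem_intact) are hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical accounting layout, inferred from the gdb session above:
 * a 64-byte header in front of every GF_MALLOC'd region, and a 4-byte
 * trailer after it. 0xcafebabe / 0xbaadf00d match what gdb printed. */
#define GF_MEM_HEADER_MAGIC  0xcafebabeU
#define GF_MEM_TRAILER_MAGIC 0xbaadf00dU

struct mem_header {
    uint32_t type;       /* mem-type index, e.g. gf_common_mt_asprintf (39) */
    uint32_t magic;      /* GF_MEM_HEADER_MAGIC if the header is intact */
    uint64_t size;       /* user-visible allocation size */
    void    *mem_acct;   /* accounting struct this allocation was charged to */
    char     pad[64 - 2 * sizeof(uint32_t) - sizeof(uint64_t) - sizeof(void *)];
};
_Static_assert(sizeof(struct mem_header) == 64, "header must be 64 bytes");

/* Allocate like GF_MALLOC: header + payload + trailer. */
void *gf_malloc_sketch(size_t size, uint32_t type, void *mem_acct)
{
    char *base = malloc(sizeof(struct mem_header) + size + sizeof(uint32_t));
    if (!base)
        return NULL;
    struct mem_header *hdr = (struct mem_header *)base;
    hdr->type = type;
    hdr->magic = GF_MEM_HEADER_MAGIC;
    hdr->size = size;
    hdr->mem_acct = mem_acct;
    char *payload = base + sizeof(struct mem_header);
    uint32_t trailer = GF_MEM_TRAILER_MAGIC;
    memcpy(payload + size, &trailer, sizeof(trailer));
    return payload;
}

/* The check comment 8 did by hand: walk back 64 bytes for the header
 * magic, then forward hdr->size bytes for the trailer magic. */
int gf_mem_intact(const void *ptr)
{
    const struct mem_header *hdr =
        (const struct mem_header *)((const char *)ptr - sizeof(struct mem_header));
    uint32_t trailer;
    memcpy(&trailer, (const char *)ptr + hdr->size, sizeof(trailer));
    return hdr->magic == GF_MEM_HEADER_MAGIC && trailer == GF_MEM_TRAILER_MAGIC;
}
```

In the session above both magics were intact for the 42-byte "meta.so" path string, which is what makes the free() abort so puzzling: the block itself was not overrun, only charged to an unexpected mem_acct.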

Comment 9 Soumya Koduri 2016-10-03 13:59:09 UTC
(gdb) thread 4
[Switching to thread 4 (Thread 0x7f15006a3700 (LWP 7933))]
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185	62:	movl	(%rsp), %edi
(gdb) bt
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f153aacfcf3 in glfs_lock (fs=fs@entry=0x7f14f8bfe400) at glfs-internal.h:318
#2  glfs_init_wait (fs=fs@entry=0x7f14f8bfe400) at glfs.c:887
#3  0x00007f153aad01c0 in pub_glfs_init (fs=fs@entry=0x7f14f8bfe400) at glfs.c:997
#4  0x00007f153aefce08 in glusterfs_create_export (fsal_hdl=0x7f153b10b390 <GlusterFS+112>, 
    parse_node=0x7f14f8c19e90, err_type=<optimized out>, up_ops=0x7f154002c800 <fsal_up_top>)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/FSAL_GLUSTER/export.c:675
#5  0x00007f153fda2480 in fsal_cfg_commit (node=0x7f14f8c19e90, link_mem=0x7f14f8c19d08, 
    self_struct=<optimized out>, err_type=0x7f15006a21c0)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/support/exports.c:746
#6  0x00007f153fddad46 in proc_block (node=<optimized out>, item=<optimized out>, link_mem=<optimized out>, 
    err_type=<optimized out>) at /usr/src/debug/nfs-ganesha-2.4.0/src/config_parsing/config_parsing.c:1337
#7  0x00007f153fdda0d2 in do_block_load (err_type=<optimized out>, param_struct=<optimized out>, 
    relax=<optimized out>, params=<optimized out>, blk=<optimized out>)
---Type <return> to continue, or q <return> to quit---
    at /usr/src/debug/nfs-ganesha-2.4.0/src/config_parsing/config_parsing.c:1195
#8  proc_block (node=<optimized out>, item=<optimized out>, link_mem=<optimized out>, err_type=<optimized out>)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/config_parsing/config_parsing.c:1321
#9  0x00007f153fddb5b9 in load_config_from_node (tree_node=0x7f14f8bfce80, 
    conf_blk=0x7f1540030200 <add_export_param>, param=param@entry=0x0, unique=unique@entry=false, 
    err_type=err_type@entry=0x7f15006a21c0)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/config_parsing/config_parsing.c:1836
#10 0x00007f153fdb1ab7 in gsh_export_addexport (args=<optimized out>, reply=0x7f15402e4fb0, error=0x7f15006a22e0)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/support/export_mgr.c:967
#11 0x00007f153fdd6319 in dbus_message_entrypoint (conn=0x7f15402e4cd0, msg=msg@entry=0x7f15402e5160, 
    user_data=user_data@entry=0x7f1540031c60 <export_interfaces>)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/dbus/dbus_server.c:512
#12 0x00007f153f66dc76 in _dbus_object_tree_dispatch_and_unlock (tree=0x7f15402eb100, 
    message=message@entry=0x7f15402e5160, found_object=found_object@entry=0x7f15006a2484) at dbus-object-tree.c:862
---Type <return> to continue, or q <return> to quit---
#13 0x00007f153f65fe49 in dbus_connection_dispatch (connection=connection@entry=0x7f15402e4cd0)
    at dbus-connection.c:4672
#14 0x00007f153f6600e2 in _dbus_connection_read_write_dispatch (connection=0x7f15402e4cd0, 
    timeout_milliseconds=timeout_milliseconds@entry=100, dispatch=dispatch@entry=1) at dbus-connection.c:3646
#15 0x00007f153f660180 in dbus_connection_read_write_dispatch (connection=<optimized out>, 
    timeout_milliseconds=timeout_milliseconds@entry=100) at dbus-connection.c:3729
#16 0x00007f153fdd7390 in gsh_dbus_thread (arg=<optimized out>)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/dbus/dbus_server.c:737
#17 0x00007f153e289dc5 in start_thread (arg=0x7f15006a3700) at pthread_create.c:308
#18 0x00007f153d956ced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) f 2
#2  glfs_init_wait (fs=fs@entry=0x7f14f8bfe400) at glfs.c:887
887		glfs_lock (fs);
(gdb) p fd
$35 = -1
(gdb) p fs
$36 = (struct glfs *) 0x7f14f8bfe400
(gdb) p fs->ctx
$37 = (glusterfs_ctx_t *) 0x7f14f8004820
(gdb) p fs->ctx->master
$38 = (void *) 0x7f14f8007500
(gdb) p fs->ctx->mem_acct
There is no member named mem_acct.
(gdb) p (xlator_t *)fs->ctx->master
$39 = (xlator_t *) 0x7f14f8007500
(gdb) p $39->mem_acct
$40 = (struct mem_acct *) 0x7f14f8c1bf70
(gdb) 

So the $17 mentioned in the above comment actually refers to the 'fs' being initialized. Maybe that is how it should be.

Comment 11 Soumya Koduri 2016-10-05 07:12:22 UTC
After a whole day of vigorous debugging by commenting out various parts of the code, I find the culprit here to be the io-threads xlator, and of course only with glfs_fini() involved :). It looks like a day-1 issue, but the reason it was not hit before is that io-threads was always disabled. It was recently enabled by default by the commit below:

commit d872c76e0602dd629698c168ff26623f2e539107
    mgmt/glusterd: Enable client-io-threads by default
    
    Details of upstream patch:
    >> Change-Id: Ic013a772ffc3f8c93673bbee064ff4cc372fe128
    >> Reviewed-on: http://review.gluster.org/15051


To start with, with every glfs_init() a few iot-worker threads are spawned (if the above option is enabled), but these are not cleaned up as part of glfs_fini(). This can be seen from the example below.

-- When nfs-ganesha hasn't exported any volumes, there are around 31 threads spawned -

  Id   Target Id         Frame 
  31   Thread 0x7fd39fdeb700 (LWP 2936) "ganesha.nfsd" pthread_cond_timedwait@@GLIBC_2.3.2 ()
    at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
  30   Thread 0x7fd39f4aa700 (LWP 2937) "ganesha.nfsd" pthread_cond_timedwait@@GLIBC_2.3.2 ()
    at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
  29   Thread 0x7fd3a3b50700 (LWP 2952) "ganesha.nfsd" pthread_cond_timedwait@@GLIBC_2.3.2 ()
    at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
  28   Thread 0x7fd39b215700 (LWP 2983) "ganesha.nfsd" pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
  27   Thread 0x7fd39aa14700 (LWP 2984) "ganesha.nfsd" do_sigwait (sig=0x7fd39aa1358c, set=<optimized out>)
    at ../sysdeps/unix/sysv/linux/sigwait.c:64


-- Now let's export a volume:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/etc/ganesha/ganesha-brick-vol.conf string:"EXPORT(Path=/brick_vol)"
method return time=1475650444.814049 sender=:1.51 -> destination=:1.52 serial=7 reply_serial=2
   string "1 exports added"


An iot-worker thread is added -

>>>>
Thread 39 (Thread 0x7fd37c2d1700 (LWP 3024)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
#1  0x00007fd37c30eac3 in iot_worker (data=0x7fd3700222c0) at io-threads.c:176
#2  0x00007fd3a295d60a in start_thread (arg=0x7fd37c2d1700) at pthread_create.c:334
#3  0x00007fd3a2276bbd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
<<<<

--- Unexport the volume -
#dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:81
method return time=1475650577.515758 sender=:1.51 -> destination=:1.53 serial=16 reply_serial=2

The iot-worker thread is not cleaned up.
>>>
Thread 39 (Thread 0x7fd37c2d1700 (LWP 3024)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
#1  0x00007fd37c30eac3 in ?? ()
#2  0x00007fd37c2d053c in ?? ()
#3  0x0000000000000000 in ?? ()
<<<

With each re-export of the volume, these orphaned threads pile up and eventually result in a segfault -

Thread 47 (Thread 0x7fd3742e9280 (LWP 3036)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
#1  0x00007fd38449fac3 in ?? ()
#2  0x00007fd3742e80bc in ?? ()
#3  0x00007fd3880061b0 in ?? ()
#4  0x00007fd388006310 in ?? ()
#5  0xffffffff742e95e8 in ?? ()
#6  0x0000000057f4a4da in ?? ()
#7  0x0000000000000000 in ?? ()

Thread 39 (Thread 0x7fd37c2d1700 (LWP 3024)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
#1  0x00007fd37c30eac3 in ?? ()
#2  0x00007fd37c2d053c in ?? ()
#3  0x0000000000000000 in ?? ()


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fd37c2d1700 (LWP 3024)]
0x00007fd37c30eac3 in ?? ()
(gdb) bt
#0  0x00007fd37c30eac3 in ?? ()
#1  0x00007fd37c2d053c in ?? ()
#2  0x0000000000000000 in ?? ()
(gdb) t a a bt



And the reason these threads result in a crash is that:
* unexport of a volume calls glfs_fini().

There are two code paths which can lead to this issue:

1) glfs_fini() -> glusterfs_graph_deactivate() -> xlator_tree_fini() -> io-threads/fini() -> GF_FREE (conf).

But this conf variable is still accessed by those iot-worker threads.

2) glfs_fini() -> glusterfs_ctx_destroy() -> xlator_tree_free_members() -> xlator_members_free() -> dlclose (xl->dlhandle)

We shouldn't unload io-threads.so as long as there is an active thread using it.

Now to address this issue, we need to fix io-threads->fini() to clean up those threads before exiting. Since that could be an intricate fix,
we could try disabling io-threads till then.
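As a minimal sketch of why the ordering matters (and of the fix direction for io-threads->fini()), consider a worker pool modeled loosely on iot_worker's condvar wait. All names here (iot_conf_sketch, iot_worker_sketch, iot_fini_sketch) are hypothetical, not the real io-threads code: the point is that fini must set a shutdown flag, broadcast, and pthread_join every worker before GF_FREE(conf) or dlclose() runs. Otherwise a still-parked worker later wakes up on freed memory or unmapped .so text, which is exactly the garbage-stack SIGSEGV captured above.

```c
#include <pthread.h>
#include <stdlib.h>

struct iot_conf_sketch {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             down;        /* set by fini to ask workers to exit */
    int             curr_count;  /* live worker threads */
};

static void *iot_worker_sketch(void *data)
{
    struct iot_conf_sketch *conf = data;
    pthread_mutex_lock(&conf->mutex);
    while (!conf->down)          /* the real worker also times out when idle */
        pthread_cond_wait(&conf->cond, &conf->mutex);
    conf->curr_count--;
    pthread_mutex_unlock(&conf->mutex);
    return NULL;
}

/* fini in the safe order: after this returns, no worker can touch conf,
 * so freeing it (and later dlclose of the .so) cannot crash a worker. */
int iot_fini_sketch(struct iot_conf_sketch *conf,
                    pthread_t *workers, int nworkers)
{
    pthread_mutex_lock(&conf->mutex);
    conf->down = 1;
    pthread_cond_broadcast(&conf->cond);  /* wake every parked worker */
    pthread_mutex_unlock(&conf->mutex);

    for (int i = 0; i < nworkers; i++)    /* wait until they are really gone */
        pthread_join(workers[i], NULL);

    int leftover = conf->curr_count;      /* must be 0 before freeing conf */
    pthread_cond_destroy(&conf->cond);
    pthread_mutex_destroy(&conf->mutex);
    free(conf);                           /* stands in for GF_FREE (conf) */
    return leftover;
}
```

The crashing code paths above do the free/dlclose without the broadcast-and-join step, which is why the orphaned threads survive glfs_fini() and blow up later.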

@Shashank, Ambarish,

Could you please turn off performance/client-io-threads and re-try your tests?

Comment 12 Niels de Vos 2016-10-05 08:00:05 UTC
Nice work Soumya! It seems that this patch is not part of upstream glusterfs-3.8, so we did not have a chance of catching this problem with the upstream testing that was done :-(

Comment 13 Shashank Raj 2016-10-05 10:29:51 UTC
Thanks Soumya.

With the client-io-threads option on, I was able to reproduce both the refresh-config crash and the volume-restart crash in one go.

[root@dhcp42-59 ~]# gluster vol get ozone all | grep client-io
performance.client-io-threads           on     

*******************************************************

After disabling client-io-threads on the volume:

[root@dhcp42-59 ~]# gluster vol set ozone performance.client-io-threads off
volume set: success

[root@dhcp42-59 ~]# gluster vol get ozone all | grep client-io
performance.client-io-threads           off 

I don't see any crashes while doing refresh-config multiple times.

******************************************************

But during volume restart I see ganesha crashing on one of the nodes with the below messages in ganesha.log:

05/10/2016 20:13:12 : epoch dcf30000 : dhcp42-96.lab.eng.blr.redhat.com : ganesha.nfsd-23973[dbus_heartbeat] unregister_fsal :FSAL :CRIT :Unregister FSAL GLUSTER with non-zero refcount=1
05/10/2016 20:13:12 : epoch dcf30000 : dhcp42-96.lab.eng.blr.redhat.com : ganesha.nfsd-23973[dbus_heartbeat] glusterfs_unload :FSAL :CRIT :FSAL Gluster unable to unload.  Dying ...

No backtrace is seen:

[Inferior 1 (process 6944) exited with code 02]
(gdb) bt
No stack.
(gdb) 

ganesha service status on one node:

Oct  5 21:04:42 dhcp43-96 systemd: nfs-ganesha.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  5 21:04:42 dhcp43-96 systemd: Unit nfs-ganesha.service entered failed state.
Oct  5 21:04:42 dhcp43-96 systemd: nfs-ganesha.service failed.

This was tried, and the crash is seen on both RHEL 6 and RHEL 7.

******************************************************

But on RHEL 6, there is another observation:

Every time it crashes on one node, I see an unwanted entry getting created under the exports folder.

[root@dhcp42-59 exports]# pwd
/var/run/gluster/shared_storage/nfs-ganesha/exports
[root@dhcp42-59 exports]# ls -ltr
total 1
----------. 1 root root   0 Oct  5 19:45 sedbj8MSj
----------. 1 root root   0 Oct  5 19:53 sedT7DQPC
----------. 1 root root   0 Oct  5 19:59 sedCneQYU
----------. 1 root root   0 Oct  5 20:01 sed1lcMM7
----------. 1 root root   0 Oct  5 20:02 sedSpRC1X
----------. 1 root root   0 Oct  5 20:02 sedAFz6LR
----------. 1 root root   0 Oct  5 20:13 sedVI3QsZ
----------. 1 root root 509 Oct  5 20:26 sed0uVXuV
----------. 1 root root   0 Oct  5 20:30 sedcl5uS8
-rw-r--r--. 1 root root 509 Oct  5 20:33 export.ozone.conf
----------. 1 root root   0 Oct  5 20:33 sedSFcZDV

*****************************************************

Comment 14 Soumya Koduri 2016-10-05 11:34:46 UTC
05/10/2016 20:13:12 : epoch dcf30000 : dhcp42-96.lab.eng.blr.redhat.com : ganesha.nfsd-23973[dbus_heartbeat] glusterfs_unload :FSAL :CRIT :FSAL Gluster unable to unload.  Dying ...

The nfs-ganesha process may have exited since it was unable to unexport the volume. It is strange that it happens on only one node. Anyway, this is a different issue. Please file a new bug with steps to reproduce for it.

Comment 15 Soumya Koduri 2016-10-05 12:25:13 UTC
I have started a discussion regarding this regression on the upstream ML as well -

http://www.gluster.org/pipermail/gluster-devel/2016-October/051121.html

We shall wait to hear how intricate fixing io-threads->fini() will be. Based on that, we can decide whether we need to revert the commit which enabled it by default.

Comment 21 surabhi 2016-12-13 07:50:27 UTC
With client-io-threads at its default value, which is on, I tried executing refresh-config multiple times, with and without changing values in the volume conf file, and the crash is not seen anymore.

Marking the BZ verified.

Comment 23 errata-xmlrpc 2017-03-23 06:06:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

