Bug 1328795 - glusterd related core observed while running automated suite for nfs-ganesha.
Summary: glusterd related core observed while running automated suite for nfs-ganesha.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Duplicates: 1454610 (view as bug list)
Depends On:
Blocks:
 
Reported: 2016-04-20 09:59 UTC by Shashank Raj
Modified: 2017-05-23 10:38 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-20 10:48:13 UTC
Target Upstream Version:



Description Shashank Raj 2016-04-20 09:59:48 UTC
Description of problem:
glusterd related core observed while running automated suite for nfs-ganesha.

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-1
ganesha-2.3.1-3

How reproducible:
Once

Steps to Reproduce:
1. Create a 4-node cluster and configure nfs-ganesha on it.
2. Not sure of the exact steps that caused this issue, but I was executing gluster operations test cases on the ganesha configuration.
3. The trace below was observed:

(gdb) bt
#0  0x00007f56eb9130ad in rcu_read_lock_bp () from /lib64/liburcu-bp.so.1
#1  0x00007f56ebede320 in __glusterd_peer_rpc_notify (rpc=rpc@entry=0x7f56f8b6c8b0, mydata=mydata@entry=0x7f56f8b6c1f0,
    event=event@entry=RPC_CLNT_DISCONNECT, data=data@entry=0x0) at glusterd-handler.c:5144
#2  0x00007f56ebed3eec in glusterd_big_locked_notify (rpc=0x7f56f8b6c8b0, mydata=0x7f56f8b6c1f0, event=RPC_CLNT_DISCONNECT, data=0x0,
    notify_fn=0x7f56ebede2d0 <__glusterd_peer_rpc_notify>) at glusterd-handler.c:71
#3  0x00007f56f714bbf0 in rpc_clnt_notify (trans=<optimized out>, mydata=0x7f56f8b6c8e0, event=RPC_TRANSPORT_DISCONNECT, data=0x7f56f8b6fa50)
    at rpc-clnt.c:872
#4  0x00007f56f7147823 in rpc_transport_notify (this=this@entry=0x7f56f8b6fa50, event=event@entry=RPC_TRANSPORT_DISCONNECT,
    data=data@entry=0x7f56f8b6fa50) at rpc-transport.c:546
#5  0x00007f56e9369272 in socket_event_poll_err (this=0x7f56f8b6fa50) at socket.c:1152
#6  socket_event_handler (fd=fd@entry=8, idx=idx@entry=2, data=0x7f56f8b6fa50, poll_in=1, poll_out=0, poll_err=<optimized out>) at socket.c:2357
#7  0x00007f56f73de9ca in event_dispatch_epoll_handler (event=0x7f56e751ae80, event_pool=0x7f56f8b02d10) at event-epoll.c:575
#8  event_dispatch_epoll_worker (data=0x7f56f8b16060) at event-epoll.c:678
#9  0x00007f56f61e5dc5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f56f5b2c1cd in clone () from /lib64/libc.so.6


Actual results:

glusterd related core observed while running automated suite for nfs-ganesha.

Expected results:

There should not be any cores.

Additional info:

sosreports will be attached.

Comment 2 Shashank Raj 2016-04-20 10:04:34 UTC
sosreports are placed under http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1328795

Comment 3 Shashank Raj 2016-04-20 10:06:52 UTC
core file is also under http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1328795

Comment 4 Atin Mukherjee 2016-04-20 10:48:13 UTC
This is already known and has been reported earlier. In cleanup_and_exit we do not wait for the threads to complete before releasing the resources, and at that same point in time one of the threads can try to access them, resulting in a crash. This is more of a generic issue with our fini(). I am closing this bug as we don't have a plan to fix it, and it has no functional impact since it is seen only when glusterd is shut down.

Comment 7 Atin Mukherjee 2017-05-23 10:38:43 UTC
*** Bug 1454610 has been marked as a duplicate of this bug. ***

