Bug 849136 - Crash in inode_path
Status: CLOSED DUPLICATE of bug 826080
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs-rdma
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Raghavendra G
QA Contact: Sachidananda Urs
Keywords: Triaged
Depends On: 824533 826080
Blocks: 858456
Reported: 2012-08-17 07:59 EDT by Vidya Sakar
Modified: 2013-03-03 21:06 EST
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 824533
Clones: 858456
Environment:
Last Closed: 2012-11-16 01:14:56 EST
Type: Bug
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
Attachments: None
Description Vidya Sakar 2012-08-17 07:59:23 EDT
+++ This bug was initially created as a clone of Bug #824533 +++

Description of problem: While deleting files from a FUSE mount, I got this crash.

Volume Name: rdma
Type: Distributed-Replicate
Volume ID: ac8032f9-d169-4992-9288-393a99122227
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: rdma
Bricks:
Brick1: 10.16.157.95:/home/s0
Brick2: 10.16.157.97:/home/s0
Brick3: 10.16.157.99:/home/s0
Brick4: 10.16.157.101:/home/s0
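
For reference, a 2 x 2 distributed-replicate volume over RDMA matching the layout above could be created with the gluster CLI roughly as follows (hosts and brick paths taken from the report; treat this as a sketch, since option syntax can vary between releases):

```shell
# Create a 2x2 distributed-replicate volume using the RDMA transport.
# Bricks are paired into replica sets in the order they are listed.
gluster volume create rdma replica 2 transport rdma \
    10.16.157.95:/home/s0 10.16.157.97:/home/s0 \
    10.16.157.99:/home/s0 10.16.157.101:/home/s0

gluster volume start rdma

# FUSE mount (the mount type in this report):
mount -t glusterfs 10.16.157.95:/rdma /mnt/rdma
```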


Version-Release number of selected component (if applicable): 3.3.0qa42


How reproducible: Consistently


Additional info:

 gdb glusterfs -c /core.11262

(gdb) bt
#0  0x0000003fe0c32885 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x0000003fe0c34065 in abort () at abort.c:92
#2  0x0000003fe0c2b9fe in __assert_fail_base (fmt=<value optimized out>, assertion=0x7fd050f77628 "0", file=0x7fd050f77477 "inode.c", 
    line=<value optimized out>, function=<value optimized out>) at assert.c:96
#3  0x0000003fe0c2bac0 in __assert_fail (assertion=0x7fd050f77628 "0", file=0x7fd050f77477 "inode.c", line=1090, 
    function=0x7fd050f777a0 "__inode_path") at assert.c:105
#4  0x00007fd050f33d6e in __inode_path (inode=0x7fd040991bdc, name=0x0, bufp=0x7fd041d80808) at inode.c:1090
#5  0x00007fd050f34156 in inode_path (inode=0x7fd040991bdc, name=0x0, bufp=0x7fd041d80808) at inode.c:1191
#6  0x00007fd04c774aef in protocol_client_reopendir (this=0x130e700, fdctx=0x7fd03c029940) at client-handshake.c:1096
#7  0x00007fd04c77532a in client_post_handshake (frame=0x7fd04fb4f84c, this=0x130e700) at client-handshake.c:1281
#8  0x00007fd04c775b6a in client_setvolume_cbk (req=0x7fd04126a1c4, iov=0x7fd04126a204, count=1, myframe=0x7fd04fb4f84c)
    at client-handshake.c:1439
#9  0x00007fd050cf6a38 in rpc_clnt_handle_reply (clnt=0x2997e80, pollin=0x7fd03c026550) at rpc-clnt.c:788
#10 0x00007fd050cf6dd5 in rpc_clnt_notify (trans=0x2993cc0, mydata=0x2997eb0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fd03c026550)
    at rpc-clnt.c:907
#11 0x00007fd050cf2eb8 in rpc_transport_notify (this=0x2993cc0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fd03c026550)
    at rpc-transport.c:489
#12 0x00007fd04531c760 in gf_rdma_pollin_notify (peer=0x299b198, post=0x220b2c0) at rdma.c:3100
#13 0x00007fd04531cadc in gf_rdma_recv_reply (peer=0x299b198, post=0x220b2c0) at rdma.c:3187
#14 0x00007fd04531ce1f in gf_rdma_process_recv (peer=0x299b198, wc=0x7fd041d80e40) at rdma.c:3277
#15 0x00007fd04531d0f0 in gf_rdma_recv_completion_proc (data=0x13058f0) at rdma.c:3362
#16 0x0000003fe10077f1 in start_thread (arg=0x7fd041d81700) at pthread_create.c:301
#17 0x0000003fe0ce570d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

--- Additional comment from amarts@redhat.com on 2012-07-11 07:07:55 EDT ---

This mostly happened because of RDMA disconnects (and also because 'lock-self-heal' was on).
Comment 2 Amar Tumballi 2012-08-23 02:44:49 EDT
This bug is not seen in the current master branch (which will soon be branched as RHS 2.1.0). Before considering it for a fix, we want to confirm that the bug still exists on RHS servers. If it cannot be reproduced, we would like to close this.
Comment 3 Raghavendra G 2012-11-16 01:14:56 EST

*** This bug has been marked as a duplicate of bug 826080 ***
