Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1668268

Summary: Unable to mount gluster volume
Product: [Community] GlusterFS
Component: rpc
Version: mainline
Reporter: Poornima G <pgurusid>
Assignee: bugs <bugs>
CC: bugs
Status: CLOSED CURRENTRELEASE
Fixed In Version: glusterfs-6.0
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2019-01-22 17:22:57 UTC

Description Poornima G 2019-01-22 10:37:35 UTC
Description of problem:
When attempting to mount a gluster volume with master gluster and master glusterd2 (gd2), the mount fails. The mount process crashes with the following backtrace:
(gdb) bt
#0  0x00007ffff60f2207 in __GI_raise (sig=sig@entry=6)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
#1  0x00007ffff60f38f8 in __GI_abort () at abort.c:90
#2  0x00007ffff6134d27 in __libc_message (do_abort=do_abort@entry=2, 
    fmt=fmt@entry=0x7ffff6246678 "*** Error in `%s': %s: 0x%s ***\n")
    at ../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x00007ffff613d489 in malloc_printerr (ar_ptr=0x7fffe4000020, 
    ptr=<optimized out>, str=0x7ffff6246738 "double free or corruption (fasttop)", 
    action=3) at malloc.c:5004
#4  _int_free (av=0x7fffe4000020, p=<optimized out>, have_lock=0) at malloc.c:3843
#5  0x00007ffff7abae8e in dict_destroy (this=0x7fffe4001d48) at dict.c:700
#6  0x00007ffff7abafa0 in dict_unref (this=0x7fffe4001d48) at dict.c:739
#7  0x0000000000411151 in mgmt_getspec_cbk (req=0x7fffdc002a28, iov=0x7fffdc002a60, 
    count=1, myframe=0x7fffdc001de8) at glusterfsd-mgmt.c:2132
#8  0x00007ffff78694c2 in rpc_clnt_handle_reply (clnt=0x697960, 
    pollin=0x7fffe4001010) at rpc-clnt.c:755
#9  0x00007ffff78699eb in rpc_clnt_notify (trans=0x697ce0, mydata=0x697990, 
    event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fffe4001010) at rpc-clnt.c:922
#10 0x00007ffff7865a3e in rpc_transport_notify (this=0x697ce0, 
    event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fffe4001010) at rpc-transport.c:541
#11 0x00007fffeba1e6eb in socket_event_poll_in (this=0x697ce0, notify_handled=true)
    at socket.c:2508
#12 0x00007fffeba1f703 in socket_event_handler (fd=13, idx=0, gen=1, data=0x697ce0, 
    poll_in=1, poll_out=0, poll_err=0, event_thread_died=0 '\000') at socket.c:2908
#13 0x00007ffff7b453dc in event_dispatch_epoll_handler (event_pool=0x673e90, 
    event=0x7fffe9f78e80) at event-epoll.c:642
#14 0x00007ffff7b458f8 in event_dispatch_epoll_worker (data=0x6d6de0)
    at event-epoll.c:756
#15 0x00007ffff68f1dd5 in start_thread (arg=0x7fffe9f79700) at pthread_create.c:307
#16 0x00007ffff61b9ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) 
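For context, the abort in frames #0-#5 is glibc detecting "double free or corruption (fasttop)" inside dict_destroy(): mgmt_getspec_cbk() drops a reference on a dict_t that another code path has already released. The sketch below illustrates the general refcount/double-unref pattern and a common defensive fix (unref through the owner's pointer, then clear it); the types, names, and macro are illustrative only, not the actual glusterfs dict API or the patch that was merged.

```c
#include <stdlib.h>

/* Illustrative refcounted dictionary, loosely modeled on dict_t.
 * All names here are hypothetical, not the real glusterfs API. */
typedef struct demo_dict {
    int refcount;
} demo_dict_t;

demo_dict_t *demo_dict_new(void) {
    demo_dict_t *d = calloc(1, sizeof(*d));
    if (d)
        d->refcount = 1; /* creator holds the first reference */
    return d;
}

void demo_dict_ref(demo_dict_t *d) {
    d->refcount++;
}

/* Returns the remaining refcount; frees the dict when it reaches zero.
 * Unreffing more times than refs were taken is exactly the double free
 * glibc reports in the backtrace above. */
int demo_dict_unref(demo_dict_t *d) {
    int remaining = --d->refcount;
    if (remaining == 0)
        free(d);
    return remaining;
}

/* Defensive pattern: unref through the owning pointer and clear it,
 * so a second cleanup path sees NULL and cannot release the same
 * reference again. */
#define DEMO_DICT_UNREF(ptr)      \
    do {                          \
        if (ptr) {                \
            demo_dict_unref(ptr); \
            (ptr) = NULL;         \
        }                         \
    } while (0)
```

With this pattern, if both the RPC reply callback and its caller run cleanup on the same dict, the second DEMO_DICT_UNREF() is a harmless no-op instead of a second free. The actual fix is the patch linked in comment 1.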


Version-Release number of selected component (if applicable):
master gluster
master glusterd2


Comment 1 Worker Ant 2019-01-22 10:48:27 UTC
REVIEW: https://review.gluster.org/22076 (rpc: Fix double free) posted (#1) for review on master by Poornima G

Comment 2 Worker Ant 2019-01-22 17:22:57 UTC
REVIEW: https://review.gluster.org/22076 (rpc: Fix double free) merged (#3) on master by Shyamsundar Ranganathan

Comment 3 Shyamsundar 2019-03-25 16:33:11 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/