Bug 804607 - Invalid reads found in valgrind logs during locktests
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: protocol
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assigned To: Vijay Bellur
Depends On:
Blocks: 817967
Reported: 2012-03-19 08:16 EDT by Shwetha Panduranga
Modified: 2013-07-24 13:24 EDT (History)
CC: 2 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:24:28 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shwetha Panduranga 2012-03-19 08:16:46 EDT
Description of problem:

mount log:-
--------------
[2012-03-19 20:05:47.224280] I [client-lk.c:610:decrement_reopen_fd_count] 0-dstore-client-3: last fd open'd/lock-self-heal'd - notifying CHILD-UP
[2012-03-19 20:05:47.225669] I [client3_1-fops.c:2296:client_fdctx_destroy] 0-dstore-client-3: sending release on fd
[2012-03-19 20:05:47.229143] C [mem-pool.c:541:mem_put] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0x211) [0x4eb09fc] (-->/usr/local/lib/glusterfs/3.3.0qa29/xlator/protocol/client.so(client3_1_release_cbk+0x36) [0x95bc0d5] (-->/usr/local/lib/glusterfs/3.3.0qa29/xlator/protocol/client.so(+0x152ac) [0x95b12ac]))) 0-mem-pool: mem_put called on freed ptr 0x5482434 of mem pool 0x54070b0

valgrind log:-
----------------
==14013== Invalid read of size 8
==14013==    at 0x4C5428D: call_resume (call-stub.c:3935)
==14013==    by 0xA3008FF: qr_open_cbk (quick-read.c:615)
==14013==    by 0xA0EBD6B: ioc_open_cbk (io-cache.c:593)
==14013==    by 0x9EDB98D: ra_open_cbk (read-ahead.c:119)
==14013==    by 0x9CCDBCA: wb_open_cbk (write-behind.c:1419)
==14013==    by 0x9AAB950: dht_open_cbk (dht-inode-read.c:64)
==14013==    by 0x980D256: afr_open_cbk (afr-open.c:188)
==14013==    by 0x95B3005: client3_1_open_cbk (client3_1-fops.c:381)
==14013==    by 0x4EB09FB: rpc_clnt_handle_reply (rpc-clnt.c:797)
==14013==    by 0x4EB0D98: rpc_clnt_notify (rpc-clnt.c:916)
==14013==    by 0x4EACE7B: rpc_transport_notify (rpc-transport.c:498)
==14013==    by 0x877726F: socket_event_poll_in (socket.c:1686)
==14013==  Address 0x11f2fa34 is 100 bytes inside a block of size 212 free'd
==14013==    at 0x4A0595D: free (vg_replace_malloc.c:366)
==14013==    by 0x4C5A220: __gf_free (mem-pool.c:316)
==14013==    by 0x4C5AB33: mem_put (mem-pool.c:568)
==14013==    by 0x95B11B9: FRAME_DESTROY (stack.h:155)
==14013==    by 0x95B128F: STACK_DESTROY (stack.h:183)
==14013==    by 0x95BC0D4: client3_1_release_cbk (client3_1-fops.c:2238)
==14013==    by 0x4EB09FB: rpc_clnt_handle_reply (rpc-clnt.c:797)
==14013==    by 0x4EB0D98: rpc_clnt_notify (rpc-clnt.c:916)
==14013==    by 0x4EACE7B: rpc_transport_notify (rpc-transport.c:498)
==14013==    by 0x877726F: socket_event_poll_in (socket.c:1686)
==14013==    by 0x87777F3: socket_event_handler (socket.c:1801)
==14013==    by 0x4C5900B: event_dispatch_epoll_handler (event.c:794)


Version-Release number of selected component (if applicable):
3.3.0qa29

How reproducible:
Often

Steps to Reproduce:
1. Create a distribute-replicate volume (1x3)
2. Create a fuse mount
3. Start "locktests -f ./locktests_file -n 500"
4. Bring down a brick
5. Bring the brick back online
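The steps above can be sketched as a CLI session (volume, server, and brick names are hypothetical; killing the brick's glusterfsd process and restarting with "start force" is one common way to cycle a brick on this era of GlusterFS):

```sh
# hypothetical hosts/paths; a 3-way replicated volume as in step 1
gluster volume create dstore replica 3 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start dstore
mount -t glusterfs server1:/dstore /mnt/dstore          # step 2

cd /mnt/dstore && locktests -f ./locktests_file -n 500 &  # step 3

# step 4: kill the glusterfsd process serving one brick on server3
# step 5: restart the downed brick
gluster volume start dstore force
```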

Actual results:
locktests crashed.

Additional info:
Backtrace:

(gdb) t 9
[Switching to thread 9 (Thread 14013)]#0  0x0000003f7220f36b in raise () from /lib64/libpthread.so.0
(gdb) bt
#0  0x0000003f7220f36b in raise () from /lib64/libpthread.so.0
#1  0x0000000004c3a8c7 in gf_print_trace (signum=11) at common-utils.c:437
#2  <signal handler called>
#3  0x000000000675de42 in fuse_setlk_cbk (frame=0x5494890, cookie=0x53d1b14, this=0x569dc50, op_ret=-1, op_errno=22, lock=0x0) at fuse-bridge.c:3131
#4  0x000000000a731dd7 in io_stats_lk_cbk (frame=0x53d1b14, cookie=0x673684c, this=0x56b26b0, op_ret=-1, op_errno=22, lock=0x0) at io-stats.c:1720
#5  0x0000000004c2c526 in default_lk_cbk (frame=0x673684c, cookie=0xef3086c, this=0x56b12a0, op_ret=-1, op_errno=22, lock=0x0) at defaults.c:343
#6  0x000000000a30fd53 in qr_lk_helper (frame=0xef3086c, this=0x659cdc0, fd=0xcb765f8, cmd=6, lock=0x63539f0) at quick-read.c:2981
#7  0x0000000004c4cf3a in call_resume_wind (stub=0x63539a8) at call-stub.c:2409
#8  0x0000000004c542ac in call_resume (stub=0x63539a8) at call-stub.c:3938
#9  0x000000000a300900 in qr_open_cbk (frame=0x5481338, cookie=0xef3098c, this=0x659cdc0, op_ret=0, op_errno=117, fd=0xcb765f8) at quick-read.c:615
#10 0x000000000a0ebd6c in ioc_open_cbk (frame=0xef3098c, cookie=0xec7315c, this=0x659bb30, op_ret=0, op_errno=117, fd=0xcb765f8) at io-cache.c:593
#11 0x0000000009edb98e in ra_open_cbk (frame=0xec7315c, cookie=0xec7327c, this=0x659a8a0, op_ret=0, op_errno=117, fd=0xcb765f8) at read-ahead.c:119
#12 0x0000000009ccdbcb in wb_open_cbk (frame=0xec7327c, cookie=0xef59bbc, this=0x6599600, op_ret=0, op_errno=117, fd=0xcb765f8) at write-behind.c:1419
#13 0x0000000009aab951 in dht_open_cbk (frame=0xef59bbc, cookie=0xef59cdc, this=0x56aede0, op_ret=0, op_errno=117, fd=0xcb765f8) at dht-inode-read.c:64
#14 0x000000000980d257 in afr_open_cbk (frame=0xef59cdc, cookie=0x1, this=0x56adf40, op_ret=0, op_errno=0, fd=0xcb765f8) at afr-open.c:188
#15 0x00000000095b3006 in client3_1_open_cbk (req=0xc91f2e0, iov=0xc91f320, count=1, myframe=0x10a883ec) at client3_1-fops.c:381
#16 0x0000000004eb09fc in rpc_clnt_handle_reply (clnt=0x56df910, pollin=0x13f73350) at rpc-clnt.c:797
#17 0x0000000004eb0d99 in rpc_clnt_notify (trans=0x56dfd70, mydata=0x56df940, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x13f73350) at rpc-clnt.c:916
#18 0x0000000004eace7c in rpc_transport_notify (this=0x56dfd70, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x13f73350) at rpc-transport.c:498
#19 0x0000000008777270 in socket_event_poll_in (this=0x56dfd70) at socket.c:1686
#20 0x00000000087777f4 in socket_event_handler (fd=13, idx=5, data=0x56dfd70, poll_in=1, poll_out=0, poll_err=0) at socket.c:1801
#21 0x0000000004c5900c in event_dispatch_epoll_handler (event_pool=0x52facd0, events=0x56a1c20, i=0) at event.c:794
#22 0x0000000004c5922f in event_dispatch_epoll (event_pool=0x52facd0) at event.c:856
#23 0x0000000004c595ba in event_dispatch (event_pool=0x52facd0) at event.c:956
#24 0x0000000000408057 in main (argc=4, argv=0x7ff000548) at glusterfsd.c:1647
Comment 1 Anand Avati 2012-03-19 08:58:40 EDT
CHANGE: http://review.gluster.com/2978 (protocol/client: Avoid STACK_DESTROYing more than once in RELEASE fops.) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 2 Shwetha Panduranga 2012-05-04 07:50:14 EDT
Bug is fixed. Verified on 3.3.0qa39.
