Bug 1231425 - use after free bug in dht
Summary: use after free bug in dht
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1233042 1233046
 
Reported: 2015-06-13 08:18 UTC by Pranith Kumar K
Modified: 2016-06-16 13:11 UTC (History)
3 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1233042 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:11:23 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Pranith Kumar K 2015-06-13 08:18:28 UTC
Description of problem:
While running parallel directory creation and deletion, AddressSanitizer reported the following heap-use-after-free in dht. The for loop in dht_unlock_inodelk() should break immediately once it has wound as many calls as call_cnt; iterating further dereferences state that the final callback may already have freed.
==30031== WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
=================================================================
==30031== ERROR: AddressSanitizer: heap-use-after-free on address 0x60520059819c at pc 0x7ffb608863d4 bp 0x7ffb619acfe0 sp 0x7ffb619acfd0
READ of size 4 at 0x60520059819c thread T5
    #0 0x7ffb608863d3 in dht_unlock_inodelk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1568
    #1 0x7ffb608a1dfc in dht_selfheal_dir_finish /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:114
    #2 0x7ffb608a550f in dht_selfheal_dir_xattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:627
    #3 0x7ffb6c64feef in default_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1089
    #4 0x7ffb60c1ede9 in ec_xattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:93
    #5 0x7ffb60c1f339 in ec_manager_xattr /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:165
    #6 0x7ffb60bdd5ab in __ec_manager /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
    #7 0x7ffb60bd379d in ec_resume /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:298
    #8 0x7ffb60c357a2 in ec_combine /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-combine.c:933
    #9 0x7ffb60c1e183 in ec_inode_write_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:60
    #10 0x7ffb60c23e6d in ec_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:661
    #11 0x7ffb60f3fea3 in client3_3_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:1034
    #12 0x7ffb6c3b1c97 in rpc_clnt_handle_reply /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:766
    #13 0x7ffb6c3b23bc in rpc_clnt_notify /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:894
    #14 0x7ffb6c3a9dad in rpc_transport_notify /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:543
    #15 0x7ffb61ef60f3 in socket_event_poll_in /home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2290
    #16 0x7ffb61ef6bef in socket_event_handler /home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2403
    #17 0x7ffb6c745e6f in event_dispatch_epoll_handler /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:572
    #18 0x7ffb6c74663b in event_dispatch_epoll_worker /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:674
    #19 0x7ffb6ca0cbb7 (/lib64/libasan.so.0+0x19bb7)
    #20 0x3cf4207ee4 in start_thread (/lib64/libpthread.so.0+0x3cf4207ee4)
    #21 0x3cf3ef4d1c in __clone (/lib64/libc.so.6+0x3cf3ef4d1c)
0x60520059819c is located 2076 bytes inside of 2100-byte region [0x605200597980,0x6052005981b4)
freed by thread T5 here:
    #0 0x7ffb6ca090f9 (/lib64/libasan.so.0+0x160f9)
    #1 0x7ffb6c6c9d78 in __gf_free /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:335
    #2 0x7ffb6c6cab03 in mem_put /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:587
    #3 0x7ffb6087db2a in dht_local_wipe /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:446
    #4 0x7ffb6087d0ed in dht_lock_stack_destroy /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:276
    #5 0x7ffb608849a2 in dht_inodelk_done /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1421
    #6 0x7ffb60884f5a in dht_unlock_inodelk_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1498
    #7 0x7ffb6c652200 in default_inodelk_cbk /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1176
    #8 0x7ffb60bf299e in ec_manager_inodelk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-locks.c:649
    #9 0x7ffb60bdd5ab in __ec_manager /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
    #10 0x7ffb60bdd7c4 in ec_manager /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1896
    #11 0x7ffb60bf3962 in ec_inodelk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-locks.c:773
    #12 0x7ffb60bc7c5e in ec_gf_inodelk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec.c:785
    #13 0x7ffb60886358 in dht_unlock_inodelk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1573
    #14 0x7ffb608a1dfc in dht_selfheal_dir_finish /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:114
    #15 0x7ffb608a550f in dht_selfheal_dir_xattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:627
    #16 0x7ffb6c64feef in default_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1089
    #17 0x7ffb60c1ede9 in ec_xattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:93
    #18 0x7ffb60c1f339 in ec_manager_xattr /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:165
    #19 0x7ffb60bdd5ab in __ec_manager /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
    #20 0x7ffb60bd379d in ec_resume /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:298
    #21 0x7ffb60c357a2 in ec_combine /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-combine.c:933
    #22 0x7ffb60c1e183 in ec_inode_write_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:60
    #23 0x7ffb60c23e6d in ec_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:661
    #24 0x7ffb60f3fea3 in client3_3_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:1034
    #25 0x7ffb6c3b1c97 in rpc_clnt_handle_reply /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:766
    #26 0x7ffb6c3b23bc in rpc_clnt_notify /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:894
    #27 0x7ffb6c3a9dad in rpc_transport_notify /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:543
    #28 0x7ffb61ef60f3 in socket_event_poll_in /home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2290
    #29 0x7ffb61ef6bef in socket_event_handler /home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2403
previously allocated by thread T5 here:
    #0 0x7ffb6ca09315 (/lib64/libasan.so.0+0x16315)
    #1 0x7ffb6c6c8a92 in __gf_calloc /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:116
    #2 0x7ffb6c6ca5fa in mem_get /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:484
    #3 0x7ffb6c6ca165 in mem_get0 /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:418
    #4 0x7ffb6087dbba in dht_local_init /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:457
    #5 0x7ffb6087d49f in dht_local_lock_init /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:367
    #6 0x7ffb60885463 in dht_unlock_inodelk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1550
    #7 0x7ffb608a1dfc in dht_selfheal_dir_finish /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:114
    #8 0x7ffb608a550f in dht_selfheal_dir_xattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:627
    #9 0x7ffb6c64feef in default_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1089
    #10 0x7ffb60c1ede9 in ec_xattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:93
    #11 0x7ffb60c1f339 in ec_manager_xattr /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:165
    #12 0x7ffb60bdd5ab in __ec_manager /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
    #13 0x7ffb60bd379d in ec_resume /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:298
    #14 0x7ffb60c357a2 in ec_combine /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-combine.c:933
    #15 0x7ffb60c1e183 in ec_inode_write_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:60
    #16 0x7ffb60c23e6d in ec_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:661
    #17 0x7ffb60f3fea3 in client3_3_setxattr_cbk /home/pk1/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:1034
    #18 0x7ffb6c3b1c97 in rpc_clnt_handle_reply /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:766
    #19 0x7ffb6c3b23bc in rpc_clnt_notify /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:894
    #20 0x7ffb6c3a9dad in rpc_transport_notify /home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:543
    #21 0x7ffb61ef60f3 in socket_event_poll_in /home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2290
    #22 0x7ffb61ef6bef in socket_event_handler /home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2403
    #23 0x7ffb6c745e6f in event_dispatch_epoll_handler /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:572
    #24 0x7ffb6c74663b in event_dispatch_epoll_worker /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:674
    #25 0x7ffb6ca0cbb7 (/lib64/libasan.so.0+0x19bb7)
Thread T5 created by T0 here:
    #0 0x7ffb6c9fdd2a (/lib64/libasan.so.0+0xad2a)
    #1 0x7ffb6c7468db in event_dispatch_epoll /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:728
    #2 0x7ffb6c6c711f in event_dispatch /home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event.c:127
    #3 0x40ecf3 in main /home/pk1/workspace/rhs-glusterfs/glusterfsd/src/glusterfsd.c:2333
    #4 0x3cf3e21d64 in __libc_start_main (/lib64/libc.so.6+0x3cf3e21d64)
SUMMARY: AddressSanitizer: heap-use-after-free /home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1568 dht_unlock_inodelk
Shadow bytes around the buggy address:
  0x0c0ac00aafe0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00aaff0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00ab000: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00ab010: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00ab020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c0ac00ab030: fd fd fd[fd]fd fd fd fa fa fa fa fa fa fa fa fa
  0x0c0ac00ab040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0ac00ab050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00ab060: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00ab070: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c0ac00ab080: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:     fa
  Heap right redzone:    fb
  Freed Heap region:     fd
  Stack left redzone:    f1
  Stack mid redzone:     f2
  Stack right redzone:   f3
  Stack partial redzone: f4
  Stack after return:    f5
  Stack use after scope: f8
  Global redzone:        f9
  Global init order:     f6
  Poisoned by user:      f7
  ASan internal:         fe
==30031== ABORTING
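The loop shape at fault can be illustrated with a minimal, self-contained sketch. All names here (local_t, unlock_wind, local_get_lk_count) are hypothetical stand-ins for the real structures in dht-helper.c, and the freed flag plus an instrumentation counter stand in for memory that ASan would poison. The point is only the control flow: the buggy variant re-evaluates its loop condition against local after the final wind, while the fixed variant breaks as soon as call_cnt winds have been issued and never touches local again.

```c
#include <assert.h>
#include <stdio.h>

/* Instrumentation standing in for AddressSanitizer: counts reads of
 * the structure after it has been "freed". */
static int reads_after_free;

typedef struct {
    int call_cnt; /* replies still outstanding; the last one frees local */
    int lk_count; /* number of held locks to release */
    int freed;    /* stand-in for the memory actually being returned */
} local_t;

/* Accessor used in the loop condition; ASan would flag this read
 * once the region is freed. */
static int local_get_lk_count(local_t *l) {
    if (l->freed)
        reads_after_free++;
    return l->lk_count;
}

/* Simulates a synchronous unlock reply: the final callback destroys
 * local, the way dht_inodelk_done() -> dht_local_wipe() does. */
static void unlock_wind(local_t *l) {
    if (--l->call_cnt == 0)
        l->freed = 1; /* stand-in for mem_put(local) */
}

/* Buggy shape: after the last wind, the loop condition re-reads
 * local->lk_count from freed memory. */
static void unlock_all_buggy(local_t *l) {
    for (int i = 0; i < local_get_lk_count(l); i++)
        unlock_wind(l);
}

/* Fixed shape: snapshot call_cnt up front and break immediately once
 * that many winds have been issued, so local is never dereferenced
 * after its final callback may have freed it. */
static void unlock_all_fixed(local_t *l) {
    int call_cnt = l->call_cnt;
    int wound = 0;
    for (int i = 0; i < local_get_lk_count(l); i++) {
        unlock_wind(l);
        if (++wound == call_cnt)
            break;
    }
}
```

With call_cnt == lk_count == 2, the buggy loop performs exactly one read of freed memory (the condition check after the second wind), while the fixed loop performs none.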


Version-Release number of selected component (if applicable):
mainline

How reproducible:


Steps to Reproduce:
1. Build GlusterFS with AddressSanitizer and mount a distributed (dht) volume.
2. Run parallel directory creation and deletion from the mount.
3. Watch for the ASan heap-use-after-free report in dht_unlock_inodelk.

Actual results:
AddressSanitizer reports a heap-use-after-free in dht_unlock_inodelk (dht-helper.c:1568) and the process aborts.

Expected results:
No invalid memory access; locks are released cleanly.

Additional info:

Comment 1 Anand Avati 2015-06-13 12:06:22 UTC
REVIEW: http://review.gluster.org/11209 (cluster/dht: Prevent use after free bug) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 2 Anand Avati 2015-06-15 06:53:04 UTC
REVIEW: http://review.gluster.org/11209 (cluster/dht: Prevent use after free bug) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 3 Anand Avati 2015-06-16 04:08:34 UTC
REVIEW: http://review.gluster.org/11209 (cluster/dht: Prevent use after free bug) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 4 Anand Avati 2015-06-17 12:14:16 UTC
COMMIT: http://review.gluster.org/11209 committed in master by Raghavendra G (rgowdapp@redhat.com) 
------
commit 1cc500f48005d8682f39f7c6355170df569c7603
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Sat Jun 13 17:33:14 2015 +0530

    cluster/dht: Prevent use after free bug
    
    Change-Id: I2d1f5bb2dd27f6cea52c059b4ff08ca0fa63b140
    BUG: 1231425
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/11209
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    Tested-by: Raghavendra G <rgowdapp@redhat.com>

Comment 5 Nagaprasad Sathyanarayana 2015-10-25 15:20:35 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed, so this mainline BZ is being closed as well.

Comment 6 Niels de Vos 2016-06-16 13:11:23 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

