Bug 1464453 - Fuse mount crashed with continuous dd on a file and reading the file in parallel
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: sharding
Version: 3.3
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Pranith Kumar K
QA Contact: SATHEESARAN
3.3.0-devel-freeze-exception
Depends On:
Blocks: 1466618 1417151 1466037
Reported: 2017-06-23 09:03 EDT by SATHEESARAN
Modified: 2017-09-21 01:02 EDT (History)
CC: 7 users

See Also:
Fixed In Version: glusterfs-3.8.4-32
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1466037
Environment:
Last Closed: 2017-09-21 01:02:13 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
coredump (2.64 MB, application/x-gzip)
2017-06-23 09:27 EDT, SATHEESARAN
fuse mount log (23.97 KB, application/x-gzip)
2017-06-23 09:27 EDT, SATHEESARAN

Description SATHEESARAN 2017-06-23 09:03:10 EDT
Description of problem:
------------------------
The fuse mount crashed after some time, while 'dd' was continuously writing to a file on the fuse-mounted gluster volume and 'cat' was reading the same file in parallel.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 3.3.0 interim (glusterfs-3.8.4-29.el7rhgs)

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Create a replica 3 volume, optimize it for virt store, start it
2. Fuse mount the volume on the RHEL 7.3 client
3. From the mount start continuous 'dd' to a file in a loop
4. From the same mount perform read on the same file using 'cat'

Actual results:
----------------
The fuse mount process crashed and dumped core

Expected results:
-----------------
The fuse mount process should not crash
Comment 1 Pranith Kumar K 2017-06-23 09:05:05 EDT
This is the ASan trace. I will take a look and update the BZ with the RC etc. in a day. I will be away for some time now.

root@dhcp35-190 - ~ 
18:25:38 :) ⚡ /usr/local/sbin/glusterfs --volfile-server=localhost.localdomain --volfile-id=/r3 /mnt/r3 -N
==388==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
=================================================================
==388==ERROR: AddressSanitizer: heap-use-after-free on address 0x611001aa7940 at pc 0x7f187f51023c bp 0x7f187d647da0 sp 0x7f187d647d90
WRITE of size 8 at 0x611001aa7940 thread T8
    #0 0x7f187f51023b in list_del ../../../../libglusterfs/src/list.h:76
    #1 0x7f187f51037f in list_move_tail ../../../../libglusterfs/src/list.h:106
    #2 0x7f187f515850 in __shard_update_shards_inode_list /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:540
    #3 0x7f187f519a88 in shard_common_resolve_shards /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:653
    #4 0x7f187f51c9a9 in shard_refresh_dot_shard /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:884
    #5 0x7f187f556eed in shard_post_lookup_readv_handler /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:3550
    #6 0x7f187f520f27 in shard_lookup_base_file /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:1197
    #7 0x7f187f55861a in shard_readv /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:3609
    #8 0x7f187f2ea89d in wb_readv_helper /home/pk/workspace/rhs-glusterfs/xlators/performance/write-behind/src/write-behind.c:1689
    #9 0x7f1890e4a629 in call_resume_wind /home/pk/workspace/rhs-glusterfs/libglusterfs/src/call-stub.c:2039
    #10 0x7f1890e73bf9 in call_resume_keep_stub /home/pk/workspace/rhs-glusterfs/libglusterfs/src/call-stub.c:2578
    #11 0x7f187f2e8229 in wb_do_winds /home/pk/workspace/rhs-glusterfs/xlators/performance/write-behind/src/write-behind.c:1524
    #12 0x7f187f2e8516 in wb_process_queue /home/pk/workspace/rhs-glusterfs/xlators/performance/write-behind/src/write-behind.c:1558
    #13 0x7f187f2ea9b4 in wb_readv /home/pk/workspace/rhs-glusterfs/xlators/performance/write-behind/src/write-behind.c:1715
    #14 0x7f1890fa8ce2 in default_readv /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:2345
    #15 0x7f1890f8fec6 in default_readv_resume /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1687
    #16 0x7f1890e4a629 in call_resume_wind /home/pk/workspace/rhs-glusterfs/libglusterfs/src/call-stub.c:2039
    #17 0x7f1890e7363f in call_resume /home/pk/workspace/rhs-glusterfs/libglusterfs/src/call-stub.c:2508
    #18 0x7f187eeb64dc in open_and_resume /home/pk/workspace/rhs-glusterfs/xlators/performance/open-behind/src/open-behind.c:245
    #19 0x7f187eeb8085 in ob_readv /home/pk/workspace/rhs-glusterfs/xlators/performance/open-behind/src/open-behind.c:401
    #20 0x7f187ec7e548 in io_stats_readv /home/pk/workspace/rhs-glusterfs/xlators/debug/io-stats/src/io-stats.c:2913
    #21 0x7f1890fa8ce2 in default_readv /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:2345
    #22 0x7f187ea1308d in meta_readv /home/pk/workspace/rhs-glusterfs/xlators/meta/src/meta.c:74
    #23 0x7f1884f1a1bd in fuse_readv_resume /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-bridge.c:2246
    #24 0x7f1884efb36d in fuse_fop_resume /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-bridge.c:556
    #25 0x7f1884ef5b4b in fuse_resolve_done /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:663
    #26 0x7f1884ef5ce7 in fuse_resolve_all /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:690
    #27 0x7f1884ef5b2c in fuse_resolve /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:654
    #28 0x7f1884ef5c94 in fuse_resolve_all /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:686
    #29 0x7f1884ef5d45 in fuse_resolve_continue /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:706
    #30 0x7f1884ef570e in fuse_resolve_fd /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:566
    #31 0x7f1884ef5ada in fuse_resolve /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:643
    #32 0x7f1884ef5bf1 in fuse_resolve_all /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:679
    #33 0x7f1884ef5daa in fuse_resolve_and_resume /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-resolve.c:718
    #34 0x7f1884f1a63c in fuse_readv /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-bridge.c:2281
    #35 0x7f1884f3ec79 in fuse_thread_proc /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-bridge.c:5071
    #36 0x7f188fbcd6c9 in start_thread (/lib64/libpthread.so.0+0x76c9)
    #37 0x7f188f4a7f7e in clone (/lib64/libc.so.6+0x107f7e)

0x611001aa7940 is located 192 bytes inside of 240-byte region [0x611001aa7880,0x611001aa7970)
freed by thread T4 here:
    #0 0x7f18912f4b00 in free (/lib64/libasan.so.3+0xc6b00)

previously allocated by thread T6 here:
    #0 0x7f18912f5020 in calloc (/lib64/libasan.so.3+0xc7020)
    #1 0x7f1890e7c67b in __gf_calloc /home/pk/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:117
    #2 0x7f187f51205a in __shard_inode_ctx_get /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:74
    #3 0x7f187f5123b4 in __shard_inode_ctx_set /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:112
    #4 0x7f187f513631 in shard_inode_ctx_set /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:172
    #5 0x7f187f52deab in shard_link_block_inode /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:1656
    #6 0x7f187f551306 in shard_common_mknod_cbk /home/pk/workspace/rhs-glusterfs/xlators/features/shard/src/shard.c:3246
    #7 0x7f187f87f3ac in dht_newfile_cbk /home/pk/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-common.c:5580
    #8 0x7f187fb83315 in afr_mknod_unwind /home/pk/workspace/rhs-glusterfs/xlators/cluster/afr/src/afr-dir-write.c:553
    #9 0x7f187fb7e65d in __afr_dir_write_cbk /home/pk/workspace/rhs-glusterfs/xlators/cluster/afr/src/afr-dir-write.c:265
    #10 0x7f187fb833ca in afr_mknod_wind_cbk /home/pk/workspace/rhs-glusterfs/xlators/cluster/afr/src/afr-dir-write.c:567
    #11 0x7f187ff2e8da in client3_3_mknod_cbk /home/pk/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:240
    #12 0x7f1890b6a3ec in rpc_clnt_handle_reply /home/pk/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:794
    #13 0x7f1890b6af04 in rpc_clnt_notify /home/pk/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:987
    #14 0x7f1890b6172c in rpc_transport_notify /home/pk/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:538
    #15 0x7f18824ad582 in socket_event_poll_in /home/pk/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2306
    #16 0x7f18824ae3ed in socket_event_handler /home/pk/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2458
    #17 0x7f1890f006e6 in event_dispatch_epoll_handler /home/pk/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:572
    #18 0x7f1890f00d9c in event_dispatch_epoll_worker /home/pk/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:648
    #19 0x7f188fbcd6c9 in start_thread (/lib64/libpthread.so.0+0x76c9)

Thread T8 created by T6 here:
    #0 0x7f189125f488 in __interceptor_pthread_create (/lib64/libasan.so.3+0x31488)
    #1 0x7f1890e26887 in gf_thread_create /home/pk/workspace/rhs-glusterfs/libglusterfs/src/common-utils.c:3733
    #2 0x7f1884f402fa in notify /home/pk/workspace/rhs-glusterfs/xlators/mount/fuse/src/fuse-bridge.c:5312
    #3 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #4 0x7f1890fb0814 in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3107
    #5 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #6 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #7 0x7f187ec97243 in notify /home/pk/workspace/rhs-glusterfs/xlators/debug/io-stats/src/io-stats.c:4150
    #8 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #9 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #10 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #11 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #12 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #13 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #14 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #15 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #16 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #17 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #18 0x7f187f8b81aa in dht_notify /home/pk/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-common.c:9391
    #19 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #20 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #21 0x7f187fc940c0 in afr_notify /home/pk/workspace/rhs-glusterfs/xlators/cluster/afr/src/afr-common.c:4833
    #22 0x7f187fc9ce0f in notify /home/pk/workspace/rhs-glusterfs/xlators/cluster/afr/src/afr.c:43
    #23 0x7f1890e02b79 in xlator_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/xlator.c:549
    #24 0x7f1890fb08af in default_notify /home/pk/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:3113
    #25 0x7f187fef815e in client_notify_dispatch /home/pk/workspace/rhs-glusterfs/xlators/protocol/client/src/client.c:90
    #26 0x7f187fef7fad in client_notify_dispatch_uniq /home/pk/workspace/rhs-glusterfs/xlators/protocol/client/src/client.c:68
    #27 0x7f187ff7d8e1 in client_notify_parents_child_up /home/pk/workspace/rhs-glusterfs/xlators/protocol/client/src/client-handshake.c:137
    #28 0x7f187ff83a9c in client_post_handshake /home/pk/workspace/rhs-glusterfs/xlators/protocol/client/src/client-handshake.c:1059
    #29 0x7f187ff84950 in client_setvolume_cbk /home/pk/workspace/rhs-glusterfs/xlators/protocol/client/src/client-handshake.c:1228
    #30 0x7f1890b6a3ec in rpc_clnt_handle_reply /home/pk/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:794
    #31 0x7f1890b6af04 in rpc_clnt_notify /home/pk/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:987
    #32 0x7f1890b6172c in rpc_transport_notify /home/pk/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:538
    #33 0x7f18824ad582 in socket_event_poll_in /home/pk/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2306
    #34 0x7f18824ae3ed in socket_event_handler /home/pk/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2458
    #35 0x7f1890f006e6 in event_dispatch_epoll_handler /home/pk/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:572
    #36 0x7f1890f00d9c in event_dispatch_epoll_worker /home/pk/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:648
    #37 0x7f188fbcd6c9 in start_thread (/lib64/libpthread.so.0+0x76c9)

Thread T6 created by T0 here:
    #0 0x7f189125f488 in __interceptor_pthread_create (/lib64/libasan.so.3+0x31488)
    #1 0x7f1890f010b4 in event_dispatch_epoll /home/pk/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:700
    #2 0x7f1890e7a8b6 in event_dispatch /home/pk/workspace/rhs-glusterfs/libglusterfs/src/event.c:124
    #3 0x40fddb in main /home/pk/workspace/rhs-glusterfs/glusterfsd/src/glusterfsd.c:2479
    #4 0x7f188f3c0400 in __libc_start_main (/lib64/libc.so.6+0x20400)

Thread T4 created by T0 here:
    #0 0x7f189125f488 in __interceptor_pthread_create (/lib64/libasan.so.3+0x31488)
    #1 0x7f1890e26887 in gf_thread_create /home/pk/workspace/rhs-glusterfs/libglusterfs/src/common-utils.c:3733
    #2 0x7f1890eaa398 in syncenv_new /home/pk/workspace/rhs-glusterfs/libglusterfs/src/syncop.c:827
    #3 0x40fc7b in main /home/pk/workspace/rhs-glusterfs/glusterfsd/src/glusterfsd.c:2461
    #4 0x7f188f3c0400 in __libc_start_main (/lib64/libc.so.6+0x20400)

SUMMARY: AddressSanitizer: heap-use-after-free ../../../../libglusterfs/src/list.h:76 in list_del
Shadow bytes around the buggy address:
  0x0c228034ced0: fd fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x0c228034cee0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c228034cef0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c228034cf00: fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c228034cf10: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c228034cf20: fd fd fd fd fd fd fd fd[fd]fd fd fd fd fd fa fa
  0x0c228034cf30: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c228034cf40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c228034cf50: fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c228034cf60: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c228034cf70: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
Comment 3 SATHEESARAN 2017-06-23 09:20:18 EDT
Below is a snippet from the fuse mount logs:

<snip>

[2017-06-23 12:43:03.004141] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 1836490: READ => -1 gfid=0a146abd-018f-4ee6-a652-d9d2fc2cc9d9 fd=0x7f994807fa70 (Stale file handle)
[2017-06-23 12:43:03.004233] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 1836491: READ => -1 gfid=0a146abd-018f-4ee6-a652-d9d2fc2cc9d9 fd=0x7f994807fa70 (Stale file handle)
[2017-06-23 12:43:03.004420] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 1836494: READ => -1 gfid=0a146abd-018f-4ee6-a652-d9d2fc2cc9d9 fd=0x7f994807fa70 (Stale file handle)
[2017-06-23 12:43:03.004755] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 1836495: READ => -1 gfid=0a146abd-018f-4ee6-a652-d9d2fc2cc9d9 fd=0x7f994807fa70 (Stale file handle)
[2017-06-23 12:43:04.694791] E [mem-pool.c:315:__gf_free] (-->/lib64/libglusterfs.so.0(cb_add_entry_buffer+0x21) [0x7f996b76ecf1] -->/lib64/libglusterfs.so.0(__cb_add_entry_buffer+0xed) [0x7f996b76eb7d] -->/lib64/libglusterfs.so.0(__gf_free+0x10c) [0x7f996b73fa5c] ) 0-: Assertion failed: GF_MEM_TRAILER_MAGIC == *(uint32_t *)((char *)free_ptr + header->size)
[2017-06-23 12:43:04.823802] E [mem-pool.c:315:__gf_free] (-->/lib64/libglusterfs.so.0(cb_add_entry_buffer+0x21) [0x7f996b76ecf1] -->/lib64/libglusterfs.so.0(__cb_add_entry_buffer+0xed) [0x7f996b76eb7d] -->/lib64/libglusterfs.so.0(__gf_free+0x10c) [0x7f996b73fa5c] ) 0-: Assertion failed: GF_MEM_TRAILER_MAGIC == *(uint32_t *)((char *)free_ptr + header->size)
[2017-06-23 12:43:05.991798] E [mem-pool.c:315:__gf_free] (-->/lib64/libglusterfs.so.0(cb_add_entry_buffer+0x21) [0x7f996b76ecf1] -->/lib64/libglusterfs.so.0(__cb_add_entry_buffer+0xed) [0x7f996b76eb7d] -->/lib64/libglusterfs.so.0(__gf_free+0x10c) [0x7f996b73fa5c] ) 0-: Assertion failed: GF_MEM_TRAILER_MAGIC == *(uint32_t *)((char *)free_ptr + header->size)
[2017-06-23 12:43:06.255062] E [mem-pool.c:315:__gf_free] (-->/lib64/libglusterfs.so.0(cb_add_entry_buffer+0x21) [0x7f996b76ecf1] -->/lib64/libglusterfs.so.0(__cb_add_entry_buffer+0xed) [0x7f996b76eb7d] -->/lib64/libglusterfs.so.0(__gf_free+0x10c) [0x7f996b73fa5c] ) 0-: Assertion failed: GF_MEM_TRAILER_MAGIC == *(uint32_t *)((char *)free_ptr + header->size)
[2017-06-23 12:43:06.631033] E [mem-pool.c:306:__gf_free] (-->/usr/lib64/glusterfs/3.8.4/xlator/protocol/client.so(+0x18ac6) [0x7f995de6fac6] -->/usr/lib64/glusterfs/3.8.4/xlator/protocol/client.so(+0x36f1e) [0x7f995de8df1e] -->/lib64/libglusterfs.so.0(__gf_free+0xac) [0x7f996b73f9fc] ) 0-: Assertion failed: GF_MEM_HEADER_MAGIC == header->magic
[2017-06-23 12:42:01.634939] E [MSGID: 109040] [dht-helper.c:1197:dht_migration_complete_check_task] 0-imgstore-dht: 29cc8354-626c-4471-a662-1203290814a1: failed to lookup the file on imgstore-dht [Stale file handle]
The message "E [MSGID: 109040] [dht-helper.c:1197:dht_migration_complete_check_task] 0-imgstore-dht: c163c260-5144-4627-a8df-2f9c9b36dc03: failed to lookup the file on imgstore-dht [Stale file handle]" repeated 3 times between [2017-06-23 12:43:03.004085] and [2017-06-23 12:43:03.004729]
pending frames:
frame : type(1) op(STAT)
frame : type(1) op(FLUSH)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash: 
2017-06-23 12:43:06
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.8.4
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7f996b719482]
/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7f996b722fb4]
/lib64/libc.so.6(+0x35250)[0x7f9969dfb250]
/lib64/libc.so.6(gsignal+0x37)[0x7f9969dfb1d7]
/lib64/libc.so.6(abort+0x148)[0x7f9969dfc8c8]
/lib64/libc.so.6(+0x74f07)[0x7f9969e3af07]
/lib64/libc.so.6(+0x7c503)[0x7f9969e42503]
/usr/lib64/glusterfs/3.8.4/xlator/protocol/client.so(+0x36f1e)[0x7f995de8df1e]
/usr/lib64/glusterfs/3.8.4/xlator/protocol/client.so(+0x18ac6)[0x7f995de6fac6]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f996b4e2840]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1e7)[0x7f996b4e2b27]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f996b4de9e3]
/usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x73d6)[0x7f99601583d6]
/usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x997c)[0x7f996015a97c]
/lib64/libglusterfs.so.0(+0x847b6)[0x7f996b7737b6]
/lib64/libpthread.so.0(+0x7dc5)[0x7f996a578dc5]
/lib64/libc.so.6(clone+0x6d)[0x7f9969ebd76d]

</snip>
Comment 4 SATHEESARAN 2017-06-23 09:27:00 EDT
Created attachment 1291048 [details]
coredump
Comment 5 SATHEESARAN 2017-06-23 09:27:35 EDT
Created attachment 1291049 [details]
fuse mount log
Comment 6 Pranith Kumar K 2017-06-23 11:06:32 EDT
RC:
1) When the shard inode is first added to the LRU list, we do an inode_lookup().
2) When we remove the shard inode from the LRU list, we do inode_forget(0).
3) Before the final inode_unref() happens on the inode, if any operation comes in on it, shard_common_resolve_shards() will add the inode back to the LRU list without an inode_lookup(). When the final unref then happens, we end up with an inode that has been freed but is still on the LRU list, and this causes various kinds of memory-corruption bugs.

I tried the following fix and ran the workload for 30 minutes without a crash, but I found another bug where 'cat' sometimes returns ENOENT. It shouldn't, because the workload never deletes the file once created.

The following change fixes it.


diff --git a/xlators/features/shard/src/shard.c b/xlators/features/shard/src/shard.c
index 141c42f..7cf44e4 100644
--- a/xlators/features/shard/src/shard.c
+++ b/xlators/features/shard/src/shard.c
@@ -504,6 +504,7 @@ __shard_update_shards_inode_list (inode_t *linked_inode, xlator_t *this,
                  */
                         gf_uuid_copy (ctx->base_gfid, base_inode->gfid);
                         ctx->block_num = block_num;
+                        inode_lookup (linked_inode);
                         list_add_tail (&ctx->ilist, &priv->ilist_head);
                         priv->inode_count++;
                 } else {
@@ -1656,7 +1657,6 @@ shard_link_block_inode (shard_local_t *local, int block_num, inode_t *inode,
         shard_inode_ctx_set (inode, this, buf, 0, SHARD_LOOKUP_MASK);
         linked_inode = inode_link (inode, priv->dot_shard_inode, block_bname,
                                    buf);
-        inode_lookup (linked_inode);
         list_index = block_num - local->first_block;
 
         /* Defer unref'ing the inodes until write is complete. These inodes are
Comment 7 Pranith Kumar K 2017-06-23 11:12:39 EDT
I found and fixed this through code reading, so will do more tests and discuss more with Krutika once she is back. I am 99% sure though.

@sas, Sorry for putting pressure on you to give me the bz id quickly :-(. I think we can improve the bz description by giving the exact commands for volume creation and the two commands you gave in the email :-). 

Again! Nice bug.

Pranith
Comment 8 SATHEESARAN 2017-06-23 12:03:18 EDT
Thanks Pranith. Here are more details with respect to the steps to hit this bug:

1. Create a replica 3 volume
# gluster volume create imgstore replica 3 host1:/brick1/b1 host2:/brick1/b1 host3:/brick1/b1

2. Optimize the volume for virt store
(You can choose to skip this step; I originally created this volume for storing VM disk images, hence this step.)
# gluster volume set imgstore group virt
# gluster volume set imgstore storage.owner-uid 107
# gluster volume set imgstore storage.owner-gid 107

3. Start the volume
# gluster volume start imgstore

4. Fuse mount the volume on a client
#  mount.glusterfs localhost:/imgstore /mnt/test

5. Open two terminal windows to this client machine

6. Run a 'dd' on a file from one terminal window
# while true; do dd if=/dev/urandom of=/mnt/test/file1 bs=128k count=100; done

7. From another terminal on the same machine, read from the same file
# while true; do cat /mnt/test/file1 > /dev/null; done

Note: steps 6 and 7 are done on the same mount, but from different ssh sessions to the client machine

Observation:
I ran the above test 4 times.
Test1: Crash was observed within 2 minutes
Test2: Crash was observed within ~5 minutes
Test3: Crash was observed after 10 minutes
Test4: Crash was observed within 5 minutes
Comment 9 Krutika Dhananjay 2017-06-24 09:20:59 EDT
So the IO was being done directly from the mount point and not from inside of any VM?
(Not complaining, a bug is a bug. :) Was just curious as to how we didn't hit this bug all this time.) 

-Krutika
Comment 10 Pranith Kumar K 2017-06-24 09:28:32 EDT
(In reply to Krutika Dhananjay from comment #9)
> So the IO was being done directly from the mount point and not from inside
> of any VM?
> (Not complaining, a bug is a bug. :) Was just curious as to how we didn't
> hit this bug all this time.) 
> 
> -Krutika

That is correct.
Comment 11 SATHEESARAN 2017-06-27 04:48:37 EDT
(In reply to Krutika Dhananjay from comment #9)
> So the IO was being done directly from the mount point and not from inside
> of any VM?
> (Not complaining, a bug is a bug. :) Was just curious as to how we didn't
> hit this bug all this time.) 
> 
> -Krutika

To give some background on the motivation behind this use case:

1. It was **not** a VM use case at all.
2. To reproduce the issue (BZ 1384983), where split-brain was observed with an arbiter volume, I wrote an automated script. With a VM workload we could only rarely hit this issue (3 times in all of our testing), so I resorted to running the automated script, which brings down one of the bricks alternately and runs some I/O (a 'dd' followed by a 'cat' on the file) directly from the mount.

While checking the correctness of the script, I ran it on a replica 3 volume and observed the crash.
Comment 13 Pranith Kumar K 2017-06-28 00:34:32 EDT
(In reply to Pranith Kumar K from comment #7)
> I found and fixed this through code reading, so will do more tests and
> discuss more with Krutika once she is back. I am 99% sure though.
> 
> @sas, Sorry for putting pressure on you to give me the bz id quickly :-(. I
> think we can improve the bz description by giving the exact commands for
> volume creation and the two commands you gave in the email :-). 
> 
> Again! Nice bug.
> 
> Pranith

I am finding more cases. I also reduced the dd count to 16 to recreate it faster. The RC is fine, but I think we need more changes for the fix. I will send out the final fix by EOD.
Comment 15 Pranith Kumar K 2017-06-28 15:39:38 EDT
(In reply to Pranith Kumar K from comment #13)
> (In reply to Pranith Kumar K from comment #7)
> > I found and fixed this through code reading, so will do more tests and
> > discuss more with Krutika once she is back. I am 99% sure though.
> > 
> > @sas, Sorry for putting pressure on you to give me the bz id quickly :-(. I
> > think we can improve the bz description by giving the exact commands for
> > volume creation and the two commands you gave in the email :-). 
> > 
> > Again! Nice bug.
> > 
> > Pranith
> 
> I am finding more cases. I also reduced the dd count to be 16 to recreate it
> faster. RC is fine, but I think we need more changes for the fix. Will send
> out the final fix by EOD

I added logs; these confirm that, after being removed from the LRU list, the inode is added back to the list.

[2017-06-28 19:01:45.119877] I [shard.c:542:__shard_update_shards_inode_list] 0-r3-shard: 0e2f4fdd-4f7b-4d0e-8027-4247560507b0 is moved to top
...
0-r3-shard: 0e2f4fdd-4f7b-4d0e-8027-4247560507b0 is removed to list
[2017-06-28 19:01:45.136590] I [shard.c:509:__shard_update_shards_inode_list] 0-r3-shard: 0e2f4fdd-4f7b-4d0e-8027-4247560507b0 is added to list
[2017-06-28 19:01:45.144234] E [MSGID: 109040] [dht-helper.c:1197:dht_migration_complete_check_task] 0-r3-dht: 0e2f4fdd-4f7b-4d0e-8027-4247560507b0: failed to lookup the file on r3-dht [Stale file handle]
[2017-06-28 19:01:45.145572] I [shard.c:5030:shard_forget] 0-r3-shard: 0e2f4fdd-4f7b-4d0e-8027-4247560507b0 is freed

I changed the patch as well to fix this.
Comment 16 Pranith Kumar K 2017-06-28 16:13:58 EDT
https://review.gluster.org/17644
Comment 17 Pranith Kumar K 2017-06-30 02:15:14 EDT
https://code.engineering.redhat.com/gerrit/#/c/110565/
Comment 19 SATHEESARAN 2017-07-10 09:45:19 EDT
Tested with glusterfs-3.8.4-32.el7rhgs using the steps listed in comment 8.

No further crashes were observed.
Comment 21 errata-xmlrpc 2017-09-21 01:02:13 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
