Bug 803675 - [eb8a9aae19755bc21afe2d8ed4893b788c4e84ff] OOM crash in glusterfsd during graph change
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assigned To: Raghavendra G
Depends On:
Blocks: 817967
Reported: 2012-03-15 08:04 EDT by Anush Shetty
Modified: 2013-07-24 13:51 EDT (History)
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Last Closed: 2013-07-24 13:51:39 EDT
Mount Type: fuse


Attachments: None
Description Anush Shetty 2012-03-15 08:04:56 EDT
Description of problem: While performing graph changes in a loop, the glusterfsd process was killed because it ran out of memory.

The volume was a single-export (single-brick) volume.


Version-Release number of selected component (if applicable): Upstream


How reproducible: Consistently


Steps to Reproduce:
1. while true; do  echo 'abc' > /mnt/gluster/dot; cat /mnt/gluster/dot > /dev/null;done
2. while true; do gluster volume set test2 performance.read-ahead off; sleep 1; gluster volume set test2 performance.read-ahead on; sleep 1; done
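While the two loops above run, the leak can be made visible before the OOM killer fires by sampling the resident set size of the brick process. A minimal sketch, assuming a process named glusterfsd as in this report (the fallback to the current shell's PID is only so the sketch runs anywhere):

```shell
#!/bin/sh
# Sample the resident set size of the brick process a few times so
# growth from the leak is visible. 'glusterfsd' is the process name
# from this report; falling back to the current shell PID is a
# hypothetical stand-in for illustration only.
pid=$(pgrep -o glusterfsd 2>/dev/null || echo $$)
i=0
while [ "$i" -lt 3 ]; do
    # VmRSS in /proc/<pid>/status is the resident set size in kB.
    awk '/^VmRSS/ {print $2, $3}' "/proc/$pid/status"
    i=$((i + 1))
    sleep 1
done
```

A steadily climbing VmRSS across the graph-change iterations would confirm the leak without waiting for the kernel to kill the process.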
Actual results: The glusterfsd process was killed.


Expected results: The process should not crash or be killed.


Additional info:

#0  0x00007fad2cebe2e8 in __dentry_search_arbit (inode=0x2100000dfe67c) at inode.c:986
        dentry = 0x0
        trav = 0x0
#1  0x00007fad2cebe62d in __inode_path (inode=0x7fad2c831648, name=0x0, bufp=0x7fad295f8d10) at inode.c:1058
        table = 0x7fad2c831638
        itrav = 0x2100000dfe67c
        trav = 0x7fad2c8319e8
        i = 84
        size = 0
        ret = 0
        len = 0
        buf = 0x0
        __FUNCTION__ = "__inode_path"
#2  0x00007fad2cebe993 in inode_path (inode=0x7fad2c831648, name=0x0, bufp=0x7fad295f8d10) at inode.c:1150
        table = 0x7fad2c831638
        ret = -1
#3  0x00007fad27f195fb in do_fd_cleanup (this=0x7adde0, conn=0x8a4ed0, frame=0x7fad2b4e0098, fdentries=0x8a7e80, fd_count=128)
    at server-helpers.c:470
        fd = 0x8a7930
        i = 47
        ret = 4
        tmp_frame = 0x7fad2b4dc79c
        bound_xl = 0x7acd00
        path = 0x8943e0 "pE\212"
        __FUNCTION__ = "do_fd_cleanup"
#4  0x00007fad27f19c1d in do_connection_cleanup (this=0x7adde0, conn=0x8a4ed0, ltable=0x8a4ff0, fdentries=0x8a7e80, fd_count=128)
    at server-helpers.c:525
        ret = 0
        saved_ret = 0
        frame = 0x7fad2b4e0098
        state = 0x0
        __FUNCTION__ = "do_connection_cleanup"
#5  0x00007fad27f19e05 in server_connection_cleanup (this=0x7adde0, conn=0x8a4ed0) at server-helpers.c:568
        ltable = 0x8a4ff0
        fdentries = 0x8a7e80
        fd_count = 128
        ret = 0
        __FUNCTION__ = "server_connection_cleanup"
#6  0x00007fad27f11c68 in grace_time_handler (data=0x8a4ed0) at server.c:61
        conn = 0x8a4ed0
        this = 0x7adde0
        cancelled = _gf_true
        detached = _gf_true
        __FUNCTION__ = "grace_time_handler"
#7  0x00007fad2cebba89 in gf_timer_proc (ctx=0x75d010) at timer.c:177
        at = 1331811648557818
        need_cbk = 1 '\001'
        now = 1331811648627477
        now_tv = {tv_sec = 1331811648, tv_usec = 627477}
        event = 0x8a3330
        reg = 0x7a1b30
        sleepts = {tv_sec = 1, tv_nsec = 0}
        __FUNCTION__ = "gf_timer_proc"
#8  0x00007fad2c83defc in start_thread (arg=0x7fad295f9700) at pthread_create.c:304
        __res = <optimized out>
        pd = 0x7fad295f9700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -1244432725517741588, 140381752949792, 140381700200896, 0, 3, 1288224899760181740, 
                1288234003482071532}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, 
              canceltype = 0}}}
        not_first_call = 0
        robust = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
        __PRETTY_FUNCTION__ = "start_thread"
#9  0x00007fad2c57889d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
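The telling frame is #0: __dentry_search_arbit is handed an inode pointer (0x2100000dfe67c) that looks corrupt, while do_fd_cleanup in frame #3 is resolving paths for 128 leaked fds during connection cleanup. A dump in this shape comes from gdb's `thread apply all bt full` against the core file. As a small sketch, the frame lines of such a dump can be condensed to just frame number, function, and source location (backtrace.txt is a hypothetical file name; the two sample frames are copied verbatim from this report):

```shell
#!/bin/sh
# Condense "bt full" frame lines to frame number, function name, and
# source location, to spot the crash path at a glance. backtrace.txt
# is a hypothetical file name; the sample frames are from this report.
cat > backtrace.txt <<'EOF'
#0  0x00007fad2cebe2e8 in __dentry_search_arbit (inode=0x2100000dfe67c) at inode.c:986
#1  0x00007fad2cebe62d in __inode_path (inode=0x7fad2c831648, name=0x0, bufp=0x7fad295f8d10) at inode.c:1058
EOF
# Frame lines start with '#'; field 4 is the function, the last field
# is the file:line location.
grep '^#' backtrace.txt | awk '{print $1, $4, $NF}'
```

This keeps the call chain readable while dropping the per-frame local variables, which is handy when comparing backtraces from repeated crashes.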
Comment 1 Anand Avati 2012-03-18 02:39:13 EDT
CHANGE: http://review.gluster.com/2954 (protocol/server: memory leak fixes.) merged in master by Anand Avati (avati@redhat.com)
Comment 2 Anand Avati 2012-03-21 01:37:25 EDT
CHANGE: http://review.gluster.com/2987 (protocol/client: memory leak fixes.) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 3 Anush Shetty 2012-04-16 05:11:55 EDT
Verified with 3.3.0qa34
