| Summary: | release_inode_locks on xport disconnect crashes | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu> |
| Component: | locks | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Raghavendra Bhat <rabhat> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | pre-release | CC: | amarts, gluster-bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-07-24 17:13:36 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | glusterfs-3.3.0qa42 | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 817967 | | |
CHANGE: http://review.gluster.com/2791 (features/locks: Don't access free'd memory) merged in master by Vijay Bellur (vijay)

Checked with glusterfs-3.3.0qa42; the crash is no longer seen.
Description of problem:

The brick crashes with the following backtrace:

    #0  0x00007fc66f34794b in lkowner_unparse (lkowner=0x7fc663aaabf0, buf=0x7fc66569b780 "ffff1c272a7f0000", buf_len=2176) at lkowner.h:47
            i = 0
            j = 0
    #1  0x00007fc66f34b92f in lkowner_utoa (lkowner=0x7fc663aaabf0) at common-utils.c:1826
            lkowner_buffer = 0x7fc66569b780 "ffff1c272a7f0000"
    #2  0x00007fc6692c69f0 in release_inode_locks_of_transport (this=0x7fc66a3fd660, dom=0x7fc663a96fa0, inode=0x7fc667237a8c, trans=0x7fc664642f80) at inodelk.c:433
            tmp = 0x7fc663a96fd8
            l = 0x7fc663aaa750
            pinode = 0x7fc663a8cf60
            granted = {next = 0x7fc667dddc50, prev = 0x7fc667dddc50}
            released = {next = 0x7fc667dddc40, prev = 0x7fc667dddc40}
            path = 0x7fc66656efc0 "<gfid:21d4230c-5611-49e0-90ee-f6f92c5d2aa9>"
            file = 0x7fc66656efc0 "<gfid:21d4230c-5611-49e0-90ee-f6f92c5d2aa9>"
            __FUNCTION__ = "release_inode_locks_of_transport"
    #3  0x00007fc6692c73b2 in pl_common_inodelk (frame=0x7fc66dcb481c, this=0x7fc66a3fd660, volume=0x7fc667fd5fe0 "repl1-replicate-0", inode=0x7fc667237a8c, cmd=6, flock=0x7fc66cb383a4, loc=0x7fc66cb3835c, fd=0x0) at inodelk.c:592
            op_ret = -1
            op_errno = 0
            ret = -1
            can_block = 0
            client_pid = 0
            transport = 0x7fc664642f80
            pinode = 0x7fc663a8cf60
            reqlock = 0x0
            dom = 0x7fc663a96fa0
            __FUNCTION__ = "pl_common_inodelk"
    #4  0x00007fc6692c784a in pl_inodelk (frame=0x7fc66dcb481c, this=0x7fc66a3fd660, volume=0x7fc667fd5fe0 "repl1-replicate-0", loc=0x7fc66cb3835c, cmd=6, flock=0x7fc66cb383a4) at inodelk.c:663
            No locals.
    #5  0x00007fc6690aa9ba in iot_inodelk_wrapper (frame=0x7fc66dcb4e28, this=0x7fc66a417660, volume=0x7fc667fd5fe0 "repl1-replicate-0", loc=0x7fc66cb3835c, cmd=6, lock=0x7fc66cb383a4) at io-threads.c:1987
            _new = 0x7fc66dcb481c
            old_THIS = 0x7fc66a417660
            tmp_cbk = 0x7fc6690aa615 <iot_inodelk_cbk>
            __FUNCTION__ = "iot_inodelk_wrapper"
    #6  0x00007fc66f35af83 in call_resume_wind (stub=0x7fc66cb3831c) at call-stub.c:2419
            __FUNCTION__ = "call_resume_wind"
    #7  0x00007fc66f3621b9 in call_resume (stub=0x7fc66cb3831c) at call-stub.c:3938
            old_THIS = 0x7fc66a417660
            __FUNCTION__ = "call_resume"

The crash happens because the lock 'l' whose lk-owner release_inode_locks_of_transport() passes to gf_log() has already been freed by the time it is dereferenced.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
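For illustration only, below is a minimal, self-contained C sketch of the use-after-free pattern described above: a lock is destroyed and its owner is then formatted for a "releasing lock" log message, reading freed memory. The struct sketch_lock type and the release_locks_* helpers are hypothetical stand-ins, not the actual locks-translator structures or the actual fix; the real change is the one merged at http://review.gluster.com/2791.

```c
/* Hypothetical sketch of the crash pattern and the obvious remedy:
 * read everything you need from the lock *before* freeing it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sketch_lock {
        char owner[64];          /* stand-in for the lock's lk-owner string */
        struct sketch_lock *next;
};

/* Buggy shape (what the backtrace suggests): the lock is freed first,
 * then its owner is formatted for the log message -> use-after-free. */
static void
release_locks_buggy (struct sketch_lock **head)
{
        struct sketch_lock *l = *head;

        while (l) {
                struct sketch_lock *next = l->next;

                free (l);                                          /* lock memory gone */
                printf ("releasing lock of owner %s\n", l->owner); /* reads freed memory */
                l = next;
        }
        *head = NULL;
}

/* Safer shape: log (or copy) the owner while the lock is still valid,
 * and only then free it. */
static void
release_locks_fixed (struct sketch_lock **head)
{
        struct sketch_lock *l = *head;

        while (l) {
                struct sketch_lock *next = l->next;

                printf ("releasing lock of owner %s\n", l->owner);
                free (l);
                l = next;
        }
        *head = NULL;
}

int
main (void)
{
        struct sketch_lock *l = calloc (1, sizeof (*l));

        if (!l)
                return 1;
        strncpy (l->owner, "ffff1c272a7f0000", sizeof (l->owner) - 1);

        /* The fixed variant is safe to run; release_locks_buggy() is kept
         * only to show the broken ordering and is intentionally not called. */
        release_locks_fixed (&l);
        (void) release_locks_buggy;
        return 0;
}
```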