Bug 1487033 - rpc: client_t and related objects leaked due to incorrect ref counts
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: 3.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Milind Changire
QA Contact:
URL:
Whiteboard:
Depends On: 1481600
Blocks:
 
Reported: 2017-08-31 05:59 UTC by Milind Changire
Modified: 2017-09-14 07:42 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.12.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1481600
Environment:
Last Closed: 2017-09-14 07:42:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Milind Changire 2017-08-31 05:59:21 UTC
+++ This bug was initially created as a clone of Bug #1481600 +++

Description of problem:
Problem:
1. incorrectly placed gf_client_get() in rpcsvc_request_init()
   gf_client_ref() in rpcsvc_request_init() should be moved to
   get_frame_from_request()

2. incorrect ref handling in server_rpc_notify() and grace_time_handler()
   2.1 last ref count on client_t should be dropped in 
       RPCSVC_EVENT_TRANSPORT_DESTROY only for non-grace-time-handling case
   2.2 ref should be taken on client_t before being delegated to
       grace_time_handler()
   2.3 ref should be dropped from client_t in server_setvolume() when the 
       grace_time_handler() is successfully canceled for a re-connected client
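The ref/unref choreography described in points 2.1 through 2.3 can be sketched as a simplified model. This is an illustration of the intended ownership rules, not the actual GlusterFS code; the helpers `arm_grace_timer()` and `transport_destroy()` are hypothetical stand-ins:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model of the client_t ref-count rules above. The helper
 * names arm_grace_timer()/transport_destroy() are hypothetical
 * stand-ins, not the real GlusterFS functions. */
typedef struct client {
    int refcount;
    int grace_timer_armed; /* nonzero while grace_time_handler() owns a ref */
} client_t;

static client_t *client_ref(client_t *c) { c->refcount++; return c; }

static int client_unref(client_t *c) {
    if (--c->refcount == 0) {
        free(c);
        return 0;
    }
    return c->refcount;
}

/* 2.2: take a ref on client_t *before* delegating it to the timer. */
static void arm_grace_timer(client_t *c) {
    client_ref(c);
    c->grace_timer_armed = 1;
}

/* 2.1: on RPCSVC_EVENT_TRANSPORT_DESTROY, drop the ref only in the
 * non-grace-time-handling case; otherwise the timer now owns it. */
static void transport_destroy(client_t *c) {
    if (!c->grace_timer_armed)
        client_unref(c);
}

/* Drop the delegated ref unconditionally when the handler exits. */
static void grace_time_handler(client_t *c) {
    c->grace_timer_armed = 0;
    client_unref(c);
}
```

The invariant is that exactly one party owns the last ref at any time: the transport until disconnect, then the grace timer once cleanup has been delegated to it. The original bug arose because both paths dropped (or neither dropped) that ref.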


Version-Release number of selected component (if applicable):


How reproducible:
always (with valgrind)

--- Additional comment from Worker Ant on 2017-08-16 19:38:37 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#9) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-22 15:48:15 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#10) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-22 15:52:15 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#11) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-22 15:53:30 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#12) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-30 08:53:20 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#13) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-30 08:56:32 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#14) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-30 09:01:12 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#15) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-30 11:22:37 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#16) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-30 11:26:02 IST ---

REVIEW: https://review.gluster.org/17982 (rpc: destroy transport after client_t) posted (#17) for review on master by Milind Changire (mchangir)

--- Additional comment from Worker Ant on 2017-08-31 09:15:13 IST ---

COMMIT: https://review.gluster.org/17982 committed in master by Raghavendra G (rgowdapp) 
------
commit 24b95089a18a6a40e7703cb344e92025d67f3086
Author: Milind Changire <mchangir>
Date:   Wed Aug 30 11:25:29 2017 +0530

    rpc: destroy transport after client_t
    
    Problem:
    1. Ref counting increment on the client_t object is done in
       rpcsvc_request_init() which is incorrect.
    2. Ref not taken when delegating to grace_time_handler()
    
    Solution:
    1. Only fop requests which require processing down the graph via
       stack 'frames' now ref count the request in get_frame_from_request()
    2. Take ref on client_t object in server_rpc_notify() but avoid
       dropping it in RPCSVC_EVENT_TRANSPORT_DESTROY. Drop the ref
       unconditionally when exiting out of grace_time_handler().
       Also, avoid dropping the ref on client_t in
       RPCSVC_EVENT_TRANSPORT_DESTROY when ref management has been
       delegated to grace_time_handler()
    
    Change-Id: Ic16246bebc7ea4490545b26564658f4b081675e4
    BUG: 1481600
    Reported-by: Raghavendra G <rgowdapp>
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: https://review.gluster.org/17982
    Tested-by: Raghavendra G <rgowdapp>
    Reviewed-by: Raghavendra G <rgowdapp>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
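The first half of the fix (taking the ref in get_frame_from_request() rather than rpcsvc_request_init()) can be sketched as follows. The types and function bodies are simplified stand-ins for illustration, not the actual server-side code:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins: only fop requests that build a frame and
 * travel down the graph should hold a ref on the client. */
typedef struct { int refcount; } client_t;
typedef struct { client_t *client; } rpcsvc_request_t;
typedef struct { client_t *client; } call_frame_t;

static client_t *client_ref(client_t *c) { c->refcount++; return c; }

/* After the fix: generic request init merely records the client.
 * (Previously a ref was taken here for *every* request, and non-fop
 * requests had no matching unref -> leak.) */
static void rpcsvc_request_init(rpcsvc_request_t *req, client_t *c) {
    req->client = c;
}

/* The ref now lives where the frame is created, so it is paired with
 * the unref performed when the frame is destroyed on unwind. */
static call_frame_t *get_frame_from_request(rpcsvc_request_t *req) {
    call_frame_t *frame = calloc(1, sizeof(*frame));
    frame->client = client_ref(req->client);
    return frame;
}
```

With the ref tied to frame creation, every increment has an obvious matching decrement at frame destruction, and requests that never build a frame no longer pin the client object.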

Comment 1 Worker Ant 2017-08-31 06:10:02 UTC
REVIEW: https://review.gluster.org/18156 (rpc: destroy transport after client_t) posted (#1) for review on release-3.12 by Milind Changire (mchangir)

Comment 2 Worker Ant 2017-09-07 06:49:56 UTC
COMMIT: https://review.gluster.org/18156 committed in release-3.12 by Jiffin Tony Thottan (jthottan)
------
commit e0335c32de133aafd88b888a0c20f4eb88bb9845
Author: Milind Changire <mchangir>
Date:   Thu Aug 31 11:37:32 2017 +0530

    rpc: destroy transport after client_t
    
    Problem:
    1. Ref counting increment on the client_t object is done in
       rpcsvc_request_init() which is incorrect.
    2. Ref not taken when delegating to grace_time_handler()
    
    Solution:
    1. Only fop requests which require processing down the graph via
       stack 'frames' now ref count the request in get_frame_from_request()
    2. Take ref on client_t object in server_rpc_notify() but avoid
       dropping it in RPCSVC_EVENT_TRANSPORT_DESTROY. Drop the ref
       unconditionally when exiting out of grace_time_handler().
       Also, avoid dropping the ref on client_t in
       RPCSVC_EVENT_TRANSPORT_DESTROY when ref management has been
       delegated to grace_time_handler()
    
    mainline:
    > BUG: 1481600
    > Reported-by: Raghavendra G <rgowdapp>
    > Signed-off-by: Milind Changire <mchangir>
    > Reviewed-on: https://review.gluster.org/17982
    > Tested-by: Raghavendra G <rgowdapp>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Smoke: Gluster Build System <jenkins.org>
    (cherry picked from commit 24b95089a18a6a40e7703cb344e92025d67f3086)
    
    Change-Id: Ic16246bebc7ea4490545b26564658f4b081675e4
    BUG: 1487033
    Reported-by: Raghavendra G <rgowdapp>
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: https://review.gluster.org/18156
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 3 Jiffin 2017-09-14 07:42:56 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.12.1, please open a new bug report.

glusterfs-3.12.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-September/032441.html
[2] https://www.gluster.org/pipermail/gluster-users/

