Bug 1339226 - gfapi: set mem_acct for the variables created for upcall
Summary: gfapi: set mem_acct for the variables created for upcall
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.7.11
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Soumya Koduri
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On: 1339214 1339228
Blocks:
 
Reported: 2016-05-24 12:28 UTC by Soumya Koduri
Modified: 2016-06-28 12:18 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.7.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1339214
Environment:
Last Closed: 2016-06-28 12:18:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Soumya Koduri 2016-05-24 12:28:11 UTC
+++ This bug was initially created as a clone of Bug #1339214 +++

Description of problem:

In 'glfs_h_poll_cache_invalidation', the variable 'up_inode_arg' is allocated with plain calloc, which does not set any mem_acct info for the allocation. But if an error occurs during processing, it is freed with 'GF_FREE', which crashes the process with SIGSEGV while trying to access the uninitialized mem_acct variables.
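
To make the failure mode concrete, below is a minimal, self-contained sketch in plain C. It is NOT GlusterFS source: acct_calloc/acct_free and ACCT_MAGIC are invented stand-ins for GF_CALLOC/GF_FREE and the GF_MEM_HEADER_MAGIC-style accounting header they maintain. Freeing a plain calloc'd pointer through the accounting-aware free reads a header that was never written, which is the same class of invalid access that crashes gfapi here.

    /* Stand-alone illustration, NOT GlusterFS source: acct_calloc/acct_free
     * mimic GF_CALLOC/GF_FREE, which keep per-allocation accounting in a
     * header placed just before the payload handed to the caller. */
    #include <stdio.h>
    #include <stdlib.h>

    #define ACCT_MAGIC 0xCAFEBABEu  /* stand-in for a magic sanity value */

    struct acct_header {
        unsigned magic;  /* written by acct_calloc, verified by acct_free */
        size_t   size;   /* recorded so the allocator can account usage */
    };

    /* Analogue of GF_CALLOC: write the accounting header, return payload. */
    static void *acct_calloc(size_t nmemb, size_t size)
    {
        /* overflow checks omitted for brevity */
        struct acct_header *h = calloc(1, sizeof(*h) + nmemb * size);
        if (!h)
            return NULL;
        h->magic = ACCT_MAGIC;
        h->size = nmemb * size;
        return h + 1;  /* caller only ever sees the payload */
    }

    /* Analogue of GF_FREE: step back to the header and validate it. */
    static void acct_free(void *ptr)
    {
        struct acct_header *h;

        if (!ptr)
            return;
        h = (struct acct_header *)ptr - 1;
        if (h->magic != ACCT_MAGIC) {
            /* For a pointer that came from plain calloc, this header read
             * touches memory the allocator never initialized; in GlusterFS
             * the equivalent mem_acct dereference kills the process with
             * SIGSEGV. */
            fprintf(stderr, "not an accounted allocation\n");
            abort();
        }
        free(h);
    }

    int main(void)
    {
        void *ok = acct_calloc(1, 64); /* header written: safe to acct_free */
        acct_free(ok);

        void *bad = calloc(1, 64);     /* no header: mirrors the gfapi bug */
        acct_free(bad);                /* reads a bogus header and aborts */
        return 0;
    }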

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Create a 4-node ganesha cluster.
2. Create a volume and enable ganesha on it.
3. Mount the volume using vers=3 or vers=4 and create nested directories on the mount point.

4. Add bricks to the volume.

gluster volume add-brick newvolume replica 2   

5. start the rebalance process:

gluster v rebalance newvolume start force

6. Observe that while rebalance is in progress, the ganesha process on the node with the mount gets killed with a segmentation fault.


Actual results:
The nfs-ganesha process crashes with SIGSEGV.

Expected results:
nfs-ganesha process shouldn't crash.

Additional info:
This issue was originally reported in bug 1339208.

--- Additional comment from Vijay Bellur on 2016-05-24 08:25:36 EDT ---

REVIEW: http://review.gluster.org/14521 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#1) for review on master by soumya k (skoduri)

Comment 1 Vijay Bellur 2016-05-24 12:29:25 UTC
REVIEW: http://review.gluster.org/14522 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#1) for review on release-3.7 by soumya k (skoduri)

Comment 2 Vijay Bellur 2016-05-24 15:15:44 UTC
REVIEW: http://review.gluster.org/14522 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#2) for review on release-3.7 by Niels de Vos (ndevos)

Comment 3 Vijay Bellur 2016-05-25 11:39:35 UTC
COMMIT: http://review.gluster.org/14522 committed in release-3.7 by Kaleb KEITHLEY (kkeithle) 
------
commit ecf3241eb51fbf5264594c65c6bdb7edac31b526
Author: Soumya Koduri <skoduri>
Date:   Tue May 24 17:42:06 2016 +0530

    gfapi/upcall: Use GF_CALLOC while allocating variables
    
    In 'glfs_h_poll_cache_invalidation', use GF_CALLOC to allocate
    'up_inode_arg' to set memory accounting which is used/referred when
    freeing the same variable in case of any errors.
    
    This is backport of below mainline fix -
             http://review.gluster.org/14521
    
    Change-Id: I365e114fa6d7abb292dacb6fc702128d046df8f8
    BUG: 1339226
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/14522
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
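
For reference, the shape of the fix looks like the fragment below. This is a sketch, not the verbatim patch (see http://review.gluster.org/14521 for the actual diff), and 'glfs_mt_upcall_inode_t' is a placeholder for whichever gfapi mem-type enum the real patch passes to GF_CALLOC:

    /* glfs_h_poll_cache_invalidation() -- sketch, not the verbatim patch */

    /* Before: plain calloc writes no accounting header, so the GF_FREE in
     * the error path reads uninitialized mem_acct data and segfaults. */
    up_inode_arg = calloc (1, sizeof (*up_inode_arg));

    /* After: GF_CALLOC tags the allocation with the memory-accounting info
     * that GF_FREE expects ('glfs_mt_upcall_inode_t' is a placeholder). */
    up_inode_arg = GF_CALLOC (1, sizeof (*up_inode_arg),
                              glfs_mt_upcall_inode_t);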

Comment 4 Kaushal 2016-06-28 12:18:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

