Bug 1339228

Summary: gfapi: set mem_acct for the variables created for upcall
Product: [Community] GlusterFS
Reporter: Soumya Koduri <skoduri>
Component: libgfapi
Assignee: Soumya Koduri <skoduri>
Status: CLOSED CURRENTRELEASE
QA Contact: Sudhir D <sdharane>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 3.8.0
CC: bugs, sdharane
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1339214
Environment:
Last Closed: 2016-06-16 14:08:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1339214
Bug Blocks: 1339226

Description Soumya Koduri 2016-05-24 12:29:20 UTC
+++ This bug was initially created as a clone of Bug #1339214 +++

Description of problem:

In 'glfs_h_poll_cache_invalidation', we create a variable 'up_inode_arg' using calloc (which does not set any mem_acct info for that allocation). But if any error occurs during processing, we free it using 'GF_FREE', which makes the process crash with SIGSEGV while trying to access the mem_acct variables.
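
For illustration, a minimal sketch of the mismatch and the fix, using glusterfs' GF_CALLOC/GF_FREE memory-accounting macros; the struct and mem-type names below are approximations for illustration, not the exact gfapi source:

    /* Before (buggy): allocation and free routines are mismatched.
     * Memory returned by plain calloc() carries no gluster mem_acct
     * header, so GF_FREE() reads garbage when it looks for one. */
    up_inode_arg = calloc (1, sizeof (struct callback_inode_arg));
    ...
    GF_FREE (up_inode_arg);   /* crash: no mem_acct header to read */

    /* After (fixed): GF_CALLOC records the allocation against a
     * mem-type, so GF_FREE can safely read the accounting header.
     * The mem-type name here is assumed for illustration. */
    up_inode_arg = GF_CALLOC (1, sizeof (struct callback_inode_arg),
                              glfs_mt_upcall_entry_t);
    ...
    GF_FREE (up_inode_arg);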

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Create a 4 node ganesha cluster.
2. Create a volume and enable ganesha on it.
3. Mount the volume using vers=3 or 4 and create nested directories on the mount point.

4. Add bricks to the volume.

gluster volume add-brick newvolume replica 2   

5. Start the rebalance process:

gluster v rebalance newvolume start force

6. Observe that while the rebalance is in progress, the ganesha process on the mounted node gets killed with a segmentation fault:


Actual results:
nfs-ganesha process crashes with SIGSEGV

Expected results:
nfs-ganesha process shouldn't crash.

Additional info:
This issue was originally reported in bug 1339208.

--- Additional comment from Vijay Bellur on 2016-05-24 08:25:36 EDT ---

REVIEW: http://review.gluster.org/14521 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#1) for review on master by soumya k (skoduri)

Comment 1 Vijay Bellur 2016-05-24 12:33:11 UTC
REVIEW: http://review.gluster.org/14523 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#1) for review on release-3.8 by soumya k (skoduri)

Comment 2 Vijay Bellur 2016-05-24 15:14:55 UTC
REVIEW: http://review.gluster.org/14523 (gfapi/upcall: Use GF_CALLOC while allocating variables) posted (#2) for review on release-3.8 by Niels de Vos (ndevos)

Comment 3 Vijay Bellur 2016-05-24 18:20:12 UTC
COMMIT: http://review.gluster.org/14523 committed in release-3.8 by Niels de Vos (ndevos) 
------
commit eb4bd2444531ad0d347c574e8341afbba8bf143d
Author: Soumya Koduri <skoduri>
Date:   Tue May 24 17:42:06 2016 +0530

    gfapi/upcall: Use GF_CALLOC while allocating variables
    
    In 'glfs_h_poll_cache_invalidation', use GF_CALLOC to allocate
    'up_inode_arg' to set memory accounting which is used/referred when
    freeing the same variable in case of any errors.
    
    This is a backport of the below mainline fix -
             http://review.gluster.org/14521
    
    Change-Id: I365e114fa6d7abb292dacb6fc702128d046df8f8
    BUG: 1339228
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/14523
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
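
For background, here is a small self-contained C analogy (not the real glusterfs implementation) of why an accounting-aware free routine crashes on memory that plain calloc returned: the free side expects a header that the plain allocator never wrote.

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified analogy: the accounted allocator prepends a header
     * that the matching free routine validates. Freeing plain
     * calloc'd memory makes the free routine read whatever happens
     * to sit before the block, as in this bug. */

    #define MAGIC 0xCAFECAFEu

    typedef struct {
            unsigned int magic;  /* written only by acct_calloc */
            size_t       size;   /* bytes handed to the caller */
    } acct_header_t;

    static void *
    acct_calloc (size_t nmemb, size_t size)
    {
            acct_header_t *h = calloc (1, sizeof (*h) + nmemb * size);

            if (!h)
                    return NULL;
            h->magic = MAGIC;
            h->size  = nmemb * size;
            return h + 1;        /* caller sees memory after the header */
    }

    static void
    acct_free (void *ptr)
    {
            acct_header_t *h;

            if (!ptr)
                    return;
            h = (acct_header_t *)ptr - 1;
            if (h->magic != MAGIC) {
                    /* plain calloc'd memory lands here, or crashes
                     * outright while reading h */
                    fprintf (stderr, "no accounting header\n");
                    abort ();
            }
            free (h);
    }

    int
    main (void)
    {
            void *good = acct_calloc (1, 64);
            acct_free (good);    /* fine: header present */

            void *bad = calloc (1, 64);
            acct_free (bad);     /* undefined behaviour: typically
                                  * aborts or segfaults, like GF_FREE
                                  * on calloc'd memory in this bug */
            return 0;
    }

The real gluster macros differ in detail, but the failure mode is the same: GF_FREE validates accounting metadata that only GF_CALLOC/GF_MALLOC write, so mixing the two allocator families crashes the process.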

Comment 4 Niels de Vos 2016-06-16 14:08:26 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user