Bug 1468863

Summary: Assert in mem_pools_fini during libgfapi-fini-hang.t on NetBSD
Product: [Community] GlusterFS
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Priority: unspecified
Status: CLOSED CURRENTRELEASE
Reporter: Jeff Darcy <jeff>
Assignee: Jeff Darcy <jeff>
CC: bugs, ndevos
Fixed In Version: glusterfs-3.12.0
Last Closed: 2017-09-05 17:36:29 UTC
Type: Bug

Description Jeff Darcy 2017-07-09 03:29:14 UTC
The test deliberately calls glfs_fini (which calls mem_pools_fini) without first calling glfs_init (which calls mem_pools_init). This trips an assert that exists to prevent a counter underflow.
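
For reference, here is a minimal sketch of the counter-guarded init/fini pattern that such an assert protects. The names (init_count, init_lock, the *_sketch functions) are simplified assumptions for illustration, not the actual libglusterfs mem-pool.c code.

/* Sketch of a refcounted init/fini pair guarded against underflow.
 * Names are illustrative assumptions, not the real mem-pool internals. */
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int init_count = 0;

void
pools_init_sketch (void)
{
        pthread_mutex_lock (&init_lock);
        init_count++;                   /* every init expects a matching fini */
        pthread_mutex_unlock (&init_lock);
}

void
pools_fini_sketch (void)
{
        pthread_mutex_lock (&init_lock);
        /* A fini without a prior init would wrap the unsigned counter, so
         * this assert fires -- the situation libgfapi-fini-hang.t provokes
         * by calling glfs_fini() without glfs_init(). */
        assert (init_count > 0);
        init_count--;
        pthread_mutex_unlock (&init_lock);
}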

Comment 1 Worker Ant 2017-07-09 03:30:22 UTC
REVIEW: https://review.gluster.org/17728 (gfapi+libglusterfs: fix mem_pools_fini without mem_pools_init case) posted (#2) for review on master by Jeff Darcy (jeff.us)

Comment 2 Worker Ant 2017-07-09 03:57:30 UTC
COMMIT: https://review.gluster.org/17728 committed in master by Jeff Darcy (jeff.us) 
------
commit 028d82b8a2434cb6d5ad707500f6dea2125ea2fa
Author: Jeff Darcy <jdarcy>
Date:   Fri Jul 7 07:49:45 2017 -0700

    gfapi+libglusterfs: fix mem_pools_fini without mem_pools_init case
    
    The change consists of two parts: make sure it doesn't happen (in
    glfs.c), and make it harmless if it does (in mem-pool.c).
    
    Change-Id: Icb7dda7a45dd3d1ade2ee3991bb6a22c8ec88424
    BUG: 1468863
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/17728
    Tested-by: Jeff Darcy <jeff.us>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
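
Conceptually, the mem-pool.c half of the fix described above turns the failing assert into a tolerated no-op, while the glfs.c half avoids reaching that path at all. A rough sketch of the "make it harmless" idea, using assumed names (init_count, a hypothetical stop_sweeper() helper) rather than the literal patch:

/* Sketch of a fini that tolerates being called without a prior init.
 * init_count and stop_sweeper() are assumed names, not the actual patch. */
#include <pthread.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int init_count = 0;

static void
stop_sweeper (void)
{
        /* hypothetical helper: stop the background pool-sweeper thread */
}

void
pools_fini_tolerant_sketch (void)
{
        pthread_mutex_lock (&init_lock);
        if (init_count > 0 && --init_count == 0)
                stop_sweeper ();        /* last matched fini: tear down */
        /* init_count == 0 on entry: never initialized, nothing to do */
        pthread_mutex_unlock (&init_lock);
}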

Comment 3 Worker Ant 2017-07-10 10:17:06 UTC
REVIEW: https://review.gluster.org/17734 (gfapi: prevent mem-pool leak in case glfs_new_fs() fails) posted (#1) for review on master by Niels de Vos (ndevos)

Comment 4 Worker Ant 2017-07-10 10:23:44 UTC
REVIEW: https://review.gluster.org/17734 (gfapi: prevent mem-pool leak in case glfs_new_fs() fails) posted (#2) for review on master by Niels de Vos (ndevos)

Comment 5 Worker Ant 2017-07-10 10:24:17 UTC
REVIEW: https://review.gluster.org/17734 (gfapi: prevent mem-pool leak in case glfs_new_fs() fails) posted (#3) for review on master by Niels de Vos (ndevos)

Comment 6 Worker Ant 2017-07-11 13:21:42 UTC
REVIEW: https://review.gluster.org/17734 (gfapi: prevent mem-pool leak in case glfs_new_fs() fails) posted (#4) for review on master by Niels de Vos (ndevos)

Comment 7 Worker Ant 2017-07-12 09:01:16 UTC
COMMIT: https://review.gluster.org/17734 committed in master by Niels de Vos (ndevos) 
------
commit a4a417e29c5b2d63e6bf5efae4f0ccf30a39647f
Author: Niels de Vos <ndevos>
Date:   Mon Jul 10 11:45:31 2017 +0200

    gfapi: prevent mem-pool leak in case glfs_new_fs() fails
    
    Commit 7039243e187 adds a call to mem_pools_init() so that the memory
    pool cleanup thread ("sweeper") is started. However, now it is possible
    that users of gfapi can not cleanup this thread because glfs_new() can
    return NULL, but the sweeper is still running.
    
    In case glfs_fs_new() fails, mem_pools_fini() needs to be called as
    well. This seems more correct than calling mem_pools_init() after
    glfs_fs_new(), and this makes using memory pools possible *really* early
    in the gfapi initialization.
    
    Change-Id: I1f2fb25cc33e227b3c33ce9d1b03f67bc27e981a
    Fixes: 7039243e187 ("gfapi: add mem_pools_init and mem_pools_fini calls")
    BUG: 1468863
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: https://review.gluster.org/17734
    Reviewed-by: Jeff Darcy <jeff.us>
    Reviewed-by: Vijay Bellur <vbellur>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: Amar Tumballi <amarts>
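
The shape of the change above, roughly: initialize the pools as early as possible, then undo that initialization if glfs_new_fs() fails. A simplified sketch under those assumptions; only glfs_new_fs(), mem_pools_init() and mem_pools_fini() are taken from the commit message, everything else is illustrative and not the literal glfs.c code:

/* Sketch of the error-path cleanup described above. */
struct glfs;

extern void mem_pools_init (void);
extern void mem_pools_fini (void);
extern struct glfs *glfs_new_fs (const char *volname);

struct glfs *
glfs_new_sketch (const char *volname)
{
        struct glfs *fs = NULL;

        /* Start the pools (and their sweeper thread) as early as possible,
         * so even very early allocations can use them. */
        mem_pools_init ();

        fs = glfs_new_fs (volname);
        if (!fs) {
                /* Without this, the sweeper would keep running even though
                 * the caller never receives a handle it could clean up. */
                mem_pools_fini ();
                return NULL;
        }

        /* ... remaining initialization ... */
        return fs;
}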

Comment 8 Shyamsundar 2017-09-05 17:36:29 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/