+++ This bug was initially created as a clone of Bug #1658050 +++

Description of problem:
-----------------------
nfs-ganesha crashed after setting 'ganesha.enable' to 'on' on a volume which is not started. The crash was observed on all nodes in the cluster.

=============================================================================
Version-Release number of selected component (if applicable):
-------------------------------------------------------------
# rpm -qa | grep ganesha
glusterfs-ganesha-3.12.2-29.el7rhgs.x86_64
nfs-ganesha-2.5.5-10.el7rhgs.x86_64
nfs-ganesha-gluster-2.5.5-10.el7rhgs.x86_64

==============================================================================
How reproducible:
-----------------
2/2

=============================================================================
Steps to Reproduce:
-------------------
1. Create a 6-node ganesha cluster.
2. Create a volume 'testvol'. Do not start the volume.
3. Set the volume option 'ganesha.enable' to 'on' on 'testvol'.
4. Observe the ganesha crash after some time.

============================================================================
Actual results:
---------------
nfs-ganesha crashed on all nodes.

=============================================================================
Expected results:
-----------------
nfs-ganesha should not crash.

=============================================================================
Additional info:

The glusterfs client is initialized twice for nfs-ganesha: once via mgmt_rpc_notify() (the normal path for gfapi) and once via mgmt_cbk_spec() (a callback sent from glusterd at the end of the volume set command). So two io threads are created. If the volume is not started, glfs_fini() destroys only one of the threads, leaving the context of the other thread invalid, which leads to the crash.
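For reference, the reproduction steps above boil down to two gluster CLI commands. This is a hedged sketch only: it assumes an already configured 6-node nfs-ganesha cluster, and the brick paths (server{1..6}:/bricks/testvol/brick) are illustrative, not taken from the report.

```shell
# Assumes a working 6-node ganesha cluster; brick paths are illustrative.
gluster volume create testvol replica 3 \
    server{1..6}:/bricks/testvol/brick
# Deliberately do NOT run 'gluster volume start testvol'.
gluster volume set testvol ganesha.enable on   # triggers the crash path
```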
If the volume is in the started state, then post init, init_export_root->mdcache_lookup_path->lookup->..->priv_glfs_active_subvol() finds that there is an old subvol and sends a notify on the old subvol with the PARENT_DOWN event, so that the io thread created first gets destroyed. If the volume is not started, the init will fail, so no lookup path will be sent post init.
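The lifecycle bug described above can be modeled abstractly. This is a minimal Python sketch of the pattern only, not gluster code: the Graph and Client classes and their method names are invented for illustration, standing in for the glusterfs graphs, their io threads, and glfs_fini().

```python
import threading

class Graph:
    """Stands in for a glusterfs graph that owns its own io thread."""
    def __init__(self, name):
        self.name = name
        self._stop = threading.Event()
        # Each graph activation spawns a worker ("io") thread.
        self.thread = threading.Thread(target=self._stop.wait, daemon=True)
        self.thread.start()

    def notify_parent_down(self):
        # Models the PARENT_DOWN event: the graph tears down its io thread.
        self._stop.set()
        self.thread.join()

class Client:
    """Models a gfapi client that ends up initialized twice."""
    def __init__(self):
        self.graphs = []

    def init_graph(self, name):
        # Called once via mgmt_rpc_notify() and again via mgmt_cbk_spec().
        self.graphs.append(Graph(name))

    def fini(self):
        # Buggy cleanup: only the newest graph is torn down, mirroring
        # glfs_fini() destroying just one of the two io threads.
        self.graphs[-1].notify_parent_down()

client = Client()
client.init_graph("graph-from-mgmt_rpc_notify")
client.init_graph("graph-from-mgmt_cbk_spec")
client.fini()
leaked = [g for g in client.graphs if g.thread.is_alive()]
# The first io thread survives fini; its context is now invalid,
# which is the crash risk the report describes.
```

In the started-volume case, the PARENT_DOWN notify on the old subvol would correspond to also calling notify_parent_down() on the first graph, leaving no leaked thread; the fix referenced in this bug deactivates the old graph during activation instead of relying on the lookup path.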
REVIEW: https://review.gluster.org/22062 (graph: deactivate existing graph in glusterfs_graph_activate()) posted (#2) for review on master by jiffin tony Thottan
This bug is moved to https://github.com/gluster/glusterfs/issues/1034, and will be tracked there from now on. Visit GitHub issues URL for further details