Bug 1336801 - ganesha exported volumes don't get synced up on a shut-down node when it comes up.
Summary: ganesha exported volumes don't get synced up on a shut-down node when it comes...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: ganesha-nfs
Version: 3.8.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Jiffin
QA Contact:
URL:
Whiteboard:
Depends On: 1327195 1330097
Blocks: 1333661
Reported: 2016-05-17 13:16 UTC by Jiffin
Modified: 2016-06-16 14:06 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.8rc2
Clone Of: 1330097
Environment:
Last Closed: 2016-06-16 14:06:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments (Terms of Use)

Comment 1 Vijay Bellur 2016-05-17 13:18:04 UTC
REVIEW: http://review.gluster.org/14397 (glusterd-ganesha : copy ganesha export configuration files during reboot) posted (#1) for review on release-3.8 by jiffin tony Thottan (jthottan)

Comment 2 Vijay Bellur 2016-05-18 13:55:42 UTC
COMMIT: http://review.gluster.org/14397 committed in release-3.8 by Kaleb KEITHLEY (kkeithle) 
------
commit e0ef957c34e4f49afc486dc8f02c8b703206be40
Author: Jiffin Tony Thottan <jthottan>
Date:   Mon Apr 18 21:34:32 2016 +0530

    glusterd-ganesha : copy ganesha export configuration files during reboot
    
    glusterd creates the export conf file for ganesha using a hook script during
    volume start, and via ganesha_manage_export() for the volume set command. But
    this routine is not invoked in the glusterd restart scenario.
    Consider the following case: in a three-node cluster, a volume gets exported
    via ganesha while one of the nodes is offline (glusterd is not running).
    When the node comes back online, that volume is not exported on that node
    due to the above-mentioned issue.
    Also, unused variables were removed from glusterd_handle_ganesha_op().
    For this patch to work, the pcs cluster should be running on that node.
    
    Upstream reference
    >Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
    >BUG: 1330097
    >Signed-off-by: Jiffin Tony Thottan <jthottan>
    >Reviewed-on: http://review.gluster.org/14063
    >Smoke: Gluster Build System <jenkins.com>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: soumya k <skoduri>
    >Reviewed-by: Atin Mukherjee <amukherj>
    >(cherry picked from commit f71e2fa49af185779b9f43e146effd122d4e9da0)
    
    Change-Id: I5b2312c2f3cef962b1f795b9f16c8f0a27f08ee5
    BUG: 1336801
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/14397
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Tested-by: Kaleb KEITHLEY <kkeithle>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
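The fix makes glusterd restore a missing ganesha export config when it starts back up. A minimal, hypothetical shell sketch of that idea follows; the directory paths, the config file naming, and the `sync_export_conf` helper are illustrative assumptions, not the actual glusterd C implementation (which performs the copy internally during its restart path):

```shell
# Hypothetical sketch: on startup, restore a missing ganesha export
# config for an exported volume by copying it from a source location
# (e.g. shared storage). In the real fix this logic lives inside
# glusterd itself, not in a script.
sync_export_conf() {
    vol=$1
    src_dir=$2    # assumed: a shared-storage copy of export configs
    dst_dir=$3    # assumed: local ganesha export dir, e.g. /etc/ganesha/exports
    conf="export.${vol}.conf"
    # Only copy when the local config is missing and a source copy exists.
    if [ ! -f "${dst_dir}/${conf}" ] && [ -f "${src_dir}/${conf}" ]; then
        mkdir -p "${dst_dir}"
        cp "${src_dir}/${conf}" "${dst_dir}/${conf}"
    fi
}
```

Run against the scenario in the commit message, the previously offline node would regain the export config for the volume on restart instead of silently skipping it.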

Comment 3 Niels de Vos 2016-06-16 14:06:58 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

