Bug 1309238 - Issues with refresh-config when the ".export_added" has different values on different nodes
Summary: Issues with refresh-config when the ".export_added" has different values on different nodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: common-ha
Version: 3.7.7
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Soumya Koduri
QA Contact:
URL:
Whiteboard:
Depends On: 1301542
Blocks:
Reported: 2016-02-17 10:08 UTC by Soumya Koduri
Modified: 2016-06-16 13:57 UTC (History)
10 users

Fixed In Version: glusterfs-3.8rc2
Clone Of: 1301542
Environment:
Last Closed: 2016-06-16 13:57:48 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Soumya Koduri 2016-02-17 10:09:47 UTC
The issue here is that
* the CLI (when "ganesha.enable" is set to on) increments the .export_added value on the local node only, and exports the volume using the export_id generated there,
whereas
* refresh-config simply copies the export files from the node it is executed on to the other nodes in the cluster. It then unexports the volume using the export_id from the first node and re-exports it with the new export file.

In the customer setup, the two nodes had different values:
on node1: .export_added was 52
on node2: .export_added was 37

Now, when ganesha.enable is turned on for a new volume, the export succeeds on node1 with export_id 53, but it can fail on node2 because another volume may already be exported there with export_id 38.
When refresh-config is then executed, it tries to unexport the volume with id 53 on node2, which fails yet again, but the subsequent add_export (re-export) succeeds once the export config file is copied over from node1.
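The divergence described above follows from each node keeping its own counter. A minimal sketch of that per-node bookkeeping (the directory variable and helper name are hypothetical illustrations, not the actual GlusterFS code):

```shell
# Hypothetical sketch of per-node export-ID allocation.
# GANESHA_DIR and next_export_id are assumptions for illustration.
GANESHA_DIR=${GANESHA_DIR:-/etc/ganesha}

next_export_id() {
    # .export_added holds the last ID handed out on *this* node only,
    # which is why the counters drift apart across the cluster
    # (e.g. 52 on node1 vs 37 on node2).
    local count
    count=$(cat "$GANESHA_DIR/.export_added" 2>/dev/null || echo 0)
    count=$((count + 1))
    echo "$count" > "$GANESHA_DIR/.export_added"
    echo "$count"
}
```

Because the counter is never synchronized between nodes, any operation that assumes the same volume has the same export_id cluster-wide is unreliable.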

Comment 2 Vijay Bellur 2016-02-17 10:17:08 UTC
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on release-3.7 by soumya k (skoduri)

Comment 3 Vijay Bellur 2016-02-26 07:46:01 UTC
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on release-3.7 by soumya k (skoduri)

Comment 4 Vijay Bellur 2016-02-28 10:54:36 UTC
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#3) for review on release-3.7 by soumya k (skoduri)

Comment 5 Vijay Bellur 2016-02-28 20:14:39 UTC
COMMIT: http://review.gluster.org/13459 committed in release-3.7 by Kaleb KEITHLEY (kkeithle) 
------
commit e0e633cdce7586af92490730257ed7f0cffcff61
Author: Soumya Koduri <skoduri>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config
    
    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster.
    
    Hence during refresh-config, it is necessary to read the ExportID on
    each of those nodes and re-export that volume with the same ID.
    
    Change-Id: I44058352fe977ccc649d378da3b68bbfb992fcd7
    BUG: 1309238
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
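The commit above makes refresh-config look up the ExportID locally on each node instead of reusing the value from the node it runs on. A minimal sketch of such a lookup (the file path and helper name are assumptions; only the `Export_Id = N;` line format follows NFS-Ganesha's usual export-block syntax, and this is not the patch's exact code):

```shell
# Read the Export_Id for a volume from this node's own export file,
# rather than trusting the ID copied from the node running refresh-config.
# get_local_export_id is a hypothetical helper for illustration.
get_local_export_id() {
    local conf="$1"
    # An NFS-Ganesha export block contains a line like:  Export_Id = 53;
    grep -o 'Export_Id[[:space:]]*=[[:space:]]*[0-9][0-9]*' "$conf" |
        grep -o '[0-9][0-9]*$'
}
```

With the ID read per node, the unexport/re-export cycle on node2 uses node2's own ID (37-era values) rather than node1's, so the unexport no longer fails.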

Comment 6 Vijay Bellur 2016-03-14 11:45:46 UTC
REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on master by Kaleb KEITHLEY (kkeithle)

Comment 7 Vijay Bellur 2016-03-15 04:36:32 UTC
REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on master by Kaleb KEITHLEY (kkeithle)

Comment 8 Vijay Bellur 2016-03-15 07:25:26 UTC
COMMIT: http://review.gluster.org/13726 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit ef1b79a86714e235a7430e2eb95acceb83cfc774
Author: Soumya Koduri <skoduri>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config
    
    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster.
    
    Hence during refresh-config, it is necessary to read the ExportID on
    each of those nodes and re-export that volume with the same ID.
    
    BUG: 1309238
    Change-Id: Id39b3a0ce2614ee611282ff2bee04cede1fc129d
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/13726
    Tested-by: Kaleb KEITHLEY <kkeithle>

Comment 9 Kaushal 2016-04-19 07:22:46 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 10 Niels de Vos 2016-06-16 13:57:48 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
