Bug 1309238 - Issues with refresh-config when the ".export_added" has different values on different nodes
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: common-ha
Version: 3.7.7
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Soumya Koduri
Keywords: Reopened, Triaged
Depends On: 1301542
Blocks:
Reported: 2016-02-17 05:08 EST by Soumya Koduri
Modified: 2016-06-16 09:57 EDT
CC: 10 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1301542
Environment:
Last Closed: 2016-06-16 09:57:48 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 1 Soumya Koduri 2016-02-17 05:09:47 EST
The issue here is that:
* the CLI (setting ganesha.enable to on) increments the .export_added value on the localhost and exports the volume using the export_id generated there,
whereas
* refresh-config simply copies the export files from the node it is executed on to the other nodes in the cluster. It then unexports the volume using the export_id from the first node and re-exports it with the new export file.

So in the customer setup, the two nodes had different values:
on node1: the .export_added value was 52
on node2: 37

Now, when ganesha.enable is turned on for a new volume, the export succeeds on node1 with export_id 53, but on node2 it can fail because some volume may already be exported with export_id 38.
When refresh-config is then executed, it tries to unexport the volume with id 53 on node2, which again fails, but the subsequent add_export (re-export) succeeds once the export config file is copied over from node1.
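
To illustrate the divergence, here is a minimal shell sketch of the node-local bookkeeping described above (the paths, helper name and export block are simplified assumptions, not the actual glusterd hook script):

GANESHA_DIR="/etc/ganesha"                    # assumed config location
EXPORT_COUNTER="$GANESHA_DIR/.export_added"

allocate_export_id() {                        # hypothetical helper
    local volname=$1
    # Read and bump the node-local counter; every node keeps its own copy.
    local count
    count=$(cat "$EXPORT_COUNTER" 2>/dev/null || echo 1)
    count=$((count + 1))
    echo "$count" > "$EXPORT_COUNTER"

    # Write a (simplified) export block that hard-codes that counter
    # as the Export_Id for this volume.
    cat > "$GANESHA_DIR/exports/export.$volname.conf" <<EOF
EXPORT {
    Export_Id = $count;
    Path = "/$volname";
    Pseudo = "/$volname";
    FSAL { Name = "GLUSTER"; Hostname = "localhost"; Volume = "$volname"; }
}
EOF
}

Because the counter is node-local, node1 (at 52) hands out Export_Id 53 for the new volume while node2 (at 37) would hand out 38, which is exactly the mismatch that refresh-config later trips over.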
Comment 2 Vijay Bellur 2016-02-17 05:17:08 EST
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on release-3.7 by soumya k (skoduri@redhat.com)
Comment 3 Vijay Bellur 2016-02-26 02:46:01 EST
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on release-3.7 by soumya k (skoduri@redhat.com)
Comment 4 Vijay Bellur 2016-02-28 05:54:36 EST
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#3) for review on release-3.7 by soumya k (skoduri@redhat.com)
Comment 5 Vijay Bellur 2016-02-28 15:14:39 EST
COMMIT: http://review.gluster.org/13459 committed in release-3.7 by Kaleb KEITHLEY (kkeithle@redhat.com) 
------
commit e0e633cdce7586af92490730257ed7f0cffcff61
Author: Soumya Koduri <skoduri@redhat.com>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config
    
    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster.
    
    Hence during refresh-config, it is necessary to read the ExportID on
    each of those nodes and re-export that volume with the same ID.
    
    Change-Id: I44058352fe977ccc649d378da3b68bbfb992fcd7
    BUG: 1309238
    Signed-off-by: Soumya Koduri <skoduri@redhat.com>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
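
For context, the approach of the fix can be sketched as follows (the helper name, paths, ssh/scp transport and dbus invocations below are illustrative assumptions, not the verbatim patch): instead of reusing the Export_Id from the node running refresh-config, each node's own export file is consulted first.

refresh_export_on_node() {                    # hypothetical helper
    local node=$1 volname=$2
    local conf="/etc/ganesha/exports/export.$volname.conf"   # assumed path

    # Read the Export_Id recorded in *that* node's export file ...
    local export_id
    export_id=$(ssh "$node" \
        "grep -o 'Export_Id *= *[0-9]*' $conf" | grep -o '[0-9]*')

    # ... copy over the refreshed config, then unexport/re-export the
    # volume on that node using its own ID.
    scp "$conf" "$node:$conf"
    ssh "$node" "dbus-send --system --print-reply \
        --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
        org.ganesha.nfsd.exportmgr.RemoveExport uint16:$export_id"
    ssh "$node" "dbus-send --system --print-reply \
        --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
        org.ganesha.nfsd.exportmgr.AddExport \
        string:$conf string:\"EXPORT(Export_Id=$export_id)\""
}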
Comment 6 Vijay Bellur 2016-03-14 07:45:46 EDT
REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on master by Kaleb KEITHLEY (kkeithle@redhat.com)
Comment 7 Vijay Bellur 2016-03-15 00:36:32 EDT
REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on master by Kaleb KEITHLEY (kkeithle@redhat.com)
Comment 8 Vijay Bellur 2016-03-15 03:25:26 EDT
COMMIT: http://review.gluster.org/13726 committed in master by Kaleb KEITHLEY (kkeithle@redhat.com) 
------
commit ef1b79a86714e235a7430e2eb95acceb83cfc774
Author: Soumya Koduri <skoduri@redhat.com>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config
    
    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster.
    
    Hence during refresh-config, it is necessary to read the ExportID on
    each of those nodes and re-export that volume with the same ID.
    
    BUG: 1309238
    Change-Id: Id39b3a0ce2614ee611282ff2bee04cede1fc129d
    Signed-off-by: Soumya Koduri <skoduri@redhat.com>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    Reviewed-on: http://review.gluster.org/13726
    Tested-by: Kaleb KEITHLEY <kkeithle@redhat.com>
Comment 9 Kaushal 2016-04-19 03:22:46 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
Comment 10 Niels de Vos 2016-06-16 09:57:48 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
