Bug 1309238
Summary: | Issues with refresh-config when the ".export_added" has different values on different nodes | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Soumya Koduri <skoduri>
Component: | common-ha | Assignee: | Soumya Koduri <skoduri>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 3.7.7 | CC: | akhakhar, jthottan, kkeithle, ndevos, nlevinki, olim, rnalakka, smohan, storage-qa-internal, vbellur
Target Milestone: | --- | Keywords: | Reopened, Triaged
Target Release: | --- | |
Hardware: | All | |
OS: | All | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.8rc2 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | 1301542 | Environment: |
Last Closed: | 2016-06-16 13:57:48 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1301542 | |
Bug Blocks: | | |
Comment 1
Soumya Koduri
2016-02-17 10:09:47 UTC
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on release-3.7 by soumya k (skoduri)

REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on release-3.7 by soumya k (skoduri)

REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#3) for review on release-3.7 by soumya k (skoduri)

COMMIT: http://review.gluster.org/13459 committed in release-3.7 by Kaleb KEITHLEY (kkeithle)

------

commit e0e633cdce7586af92490730257ed7f0cffcff61
Author: Soumya Koduri <skoduri>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config

    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster. Hence
    during refresh-config, it is necessary to read the ExportID on each of
    those nodes and re-export that volume with the same ID.

    Change-Id: I44058352fe977ccc649d378da3b68bbfb992fcd7
    BUG: 1309238
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>

REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on master by Kaleb KEITHLEY (kkeithle)

REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on master by Kaleb KEITHLEY (kkeithle)

COMMIT: http://review.gluster.org/13726 committed in master by Kaleb KEITHLEY (kkeithle)

------

commit ef1b79a86714e235a7430e2eb95acceb83cfc774
Author: Soumya Koduri <skoduri>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config

    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster. Hence
    during refresh-config, it is necessary to read the ExportID on each of
    those nodes and re-export that volume with the same ID.

    BUG: 1309238
    Change-Id: Id39b3a0ce2614ee611282ff2bee04cede1fc129d
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/13726
    Tested-by: Kaleb KEITHLEY <kkeithle>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report. glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
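To illustrate the idea behind the fix: because the ".export_added" counter can diverge across cluster nodes, the same volume may carry a different Export_Id on each node, so refresh-config must look up the ID on every node instead of assuming one shared value. The following is a minimal sketch of that per-node lookup, not the actual ganesha-ha.sh patch; the hostnames, config paths, passwordless-ssh setup, and the DBus UpdateExport invocation are assumptions made for illustration.

```python
#!/usr/bin/env python3
"""Sketch of per-node export-ID handling during refresh-config.

Assumptions (not taken from the actual patch): one export file per volume under
a shared HA config directory, passwordless root ssh between cluster nodes, and
NFS-Ganesha's exportmgr DBus interface for re-reading an export.
"""

import re
import subprocess

HA_CONF_DIR = "/etc/ganesha"  # assumed location of the shared HA config
EXPORT_FILE_FMT = "{confdir}/exports/export.{volume}.conf"


def read_export_id(node: str, volume: str) -> int:
    """Fetch the Export_Id recorded for `volume` in `node`'s export file."""
    export_file = EXPORT_FILE_FMT.format(confdir=HA_CONF_DIR, volume=volume)
    out = subprocess.run(
        ["ssh", f"root@{node}", "cat", export_file],
        check=True, capture_output=True, text=True,
    ).stdout
    match = re.search(r"Export_Id\s*=\s*(\d+)", out)
    if not match:
        raise RuntimeError(f"no Export_Id found in {export_file} on {node}")
    return int(match.group(1))


def refresh_export(node: str, volume: str, export_id: int) -> None:
    """Ask the ganesha daemon on `node` to re-read the export, keeping its ID.

    The DBus method and argument format below are an assumption modeled on
    NFS-Ganesha's exportmgr interface; the real script may differ.
    """
    export_file = EXPORT_FILE_FMT.format(confdir=HA_CONF_DIR, volume=volume)
    subprocess.run(
        ["ssh", f"root@{node}", "dbus-send", "--print-reply", "--system",
         "--dest=org.ganesha.nfsd", "/org/ganesha/nfsd/ExportMgr",
         "org.ganesha.nfsd.exportmgr.UpdateExport",
         f"string:{export_file}",
         f"string:EXPORT(Export_Id={export_id})"],
        check=True,
    )


def refresh_config(nodes, volume):
    """Re-export the volume on every node with the ID that node already uses,
    so nodes whose .export_added counters diverged keep their existing IDs."""
    for node in nodes:
        export_id = read_export_id(node, volume)  # may differ per node
        refresh_export(node, volume, export_id)


if __name__ == "__main__":
    # Hypothetical three-node ganesha cluster exporting volume "gvol0".
    refresh_config(["ha-node1", "ha-node2", "ha-node3"], "gvol0")
```

The key design point, matching the commit message above, is that the Export_Id is read on each node right before re-exporting rather than being computed once and pushed everywhere.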