Bug 1422811
| Summary: | [Geo-rep] Recreating geo-rep session with same slave after deleting with reset-sync-time fails to sync | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Kotresh HR <khiremat> |
| Component: | geo-replication | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.8 | CC: | bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.8.10 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1422760 | Environment: | |
| Last Closed: | 2017-03-18 10:52:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1422760, 1422818, 1422819 | | |
| Bug Blocks: | | | |
Description
Kotresh HR
2017-02-16 10:30:20 UTC
REVIEW: https://review.gluster.org/16641 (geo-rep: Fix xsync crawl) posted (#1) for review on release-3.8 by Kotresh HR (khiremat)

COMMIT: https://review.gluster.org/16641 committed in release-3.8 by Aravinda VK (avishwan)

------

commit 6c5e9542a62f0dee3758fb262d6101c43414010d
Author: Kotresh HR <khiremat>
Date: Wed Feb 15 03:44:17 2017 -0500

    geo-rep: Fix xsync crawl

    If stime is set to (0, 0) on the master brick root, a complete sync is
    expected, ignoring the stime set on sub-directories. But the stime
    variable used for the comparison was initialized to (-1, 0) instead of
    (0, 0). Fixed the same.

    The stime is set to (0, 0) with the 'reset-sync-time' option while
    deleting the session:

        'gluster vol geo-rep master fedora1::slave delete reset-sync-time'

    This scenario happens when the geo-rep session is deleted as above and,
    for some reason, the session is re-established with the same slave
    volume after deleting the data on the slave volume.

    > Change-Id: Ie5bc8f008dead637a09495adeef5577e2b33bc90
    > BUG: 1422760
    > Signed-off-by: Kotresh HR <khiremat>
    > Reviewed-on: https://review.gluster.org/16629
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Smoke: Gluster Build System <jenkins.org>
    > Reviewed-by: Aravinda VK <avishwan>

    Change-Id: Ie5bc8f008dead637a09495adeef5577e2b33bc90
    BUG: 1422811
    Signed-off-by: Kotresh HR <khiremat>
    (cherry picked from commit 267578ec0d6b29483a1bd402165ea8c388ad825e)
    Reviewed-on: https://review.gluster.org/16641
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still present with glusterfs-3.8.10, please open a new bug report.

glusterfs-3.8.10 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-March/000068.html
[2] https://www.gluster.org/pipermail/gluster-users/
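The sketch below illustrates the comparison described in the commit message above: when the brick-root stime has been reset to (0, 0) by 'delete reset-sync-time', sub-directory stimes must be ignored so the xsync crawl re-syncs everything. This is not the actual gsyncd code; the function name should_sync_entry() and the tuple sentinels are assumptions made purely for illustration.

```python
# Minimal sketch, assuming stimes are (seconds, nanoseconds) tuples.
# Not the real geo-replication/syncdaemon implementation.

URXTIME = (-1, 0)      # "unset" sentinel used while crawling (assumption)
RESET_STIME = (0, 0)   # stime written on the brick root by 'delete reset-sync-time'


def should_sync_entry(root_stime, subdir_stime, entry_xtime):
    """Decide whether an entry under a sub-directory needs to be synced."""
    # Buggy behaviour: only an unset root stime forced a full crawl,
    #   ignore_subdir_stime = (root_stime == URXTIME)
    # so a root stime of (0, 0) left stale sub-directory stimes in effect
    # and already-synced entries were skipped after the session was recreated.
    #
    # Fixed behaviour: a root stime of (0, 0), i.e. reset-sync-time, also
    # ignores sub-directory stimes, so a complete sync is performed.
    ignore_subdir_stime = root_stime in (URXTIME, RESET_STIME)

    if ignore_subdir_stime:
        return True                      # full sync: crawl regardless of subdir stime
    return entry_xtime > subdir_stime    # incremental: sync only newer entries


# Example: after 'delete reset-sync-time' the root stime is (0, 0), so even an
# entry older than the sub-directory stime is synced again.
assert should_sync_entry((0, 0), (1487145857, 0), (1487000000, 0)) is True
```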