Description of problem:
When a geo-rep session is deleted with the 'reset-sync-time' option and the data on the slave is deleted, recreating the session with the same old slave volume does not sync data from master to slave. This is observed only when the data was synced to the slave via xsync before the geo-rep session was deleted. Only entries directly under the root are synced.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Create a geo-rep session between the 'master' and 'slave' volumes.
2. Set the change-detector to 'xsync'.
3. Create data on the master and let it sync to the slave.
4. Delete the geo-rep session with 'reset-sync-time'.
5. Delete the data on the slave volume.
6. Recreate the geo-rep session with the same master and slave volumes.
7. Start geo-rep.

Actual results:
Only first-level entries under the root are synced; the rest are not.

Expected results:
All the data from the master is synced again.

Additional info:
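For reference, the steps above map roughly to the commands below. This is a sketch, not a verified script: the volume names 'master'/'slave', the host name 'slavehost', and the mount points /mnt/master and /mnt/slave are placeholders, and passwordless SSH plus pem setup between the master and slave nodes is assumed to be in place already.

# 1-2. Create the session and switch change detection to xsync
gluster volume geo-replication master slavehost::slave create push-pem
gluster volume geo-replication master slavehost::slave config change_detector xsync
gluster volume geo-replication master slavehost::slave start

# 3. Create data on the master mount and wait for it to sync
mkdir -p /mnt/master/dir1/dir2
dd if=/dev/urandom of=/mnt/master/dir1/dir2/file1 bs=1M count=1

# 4. Stop and delete the session with reset-sync-time
gluster volume geo-replication master slavehost::slave stop
gluster volume geo-replication master slavehost::slave delete reset-sync-time

# 5. Delete the data on the slave via its mount
rm -rf /mnt/slave/*

# 6-7. Recreate and start the session
gluster volume geo-replication master slavehost::slave create push-pem force
gluster volume geo-replication master slavehost::slave start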
Patch posted: https://review.gluster.org/#/c/16629/1
REVIEW: https://review.gluster.org/16629 (geo-rep: Fix xsync crawl) posted (#2) for review on master by Kotresh HR (khiremat)
COMMIT: https://review.gluster.org/16629 committed in master by Aravinda VK (avishwan)
------
commit 267578ec0d6b29483a1bd402165ea8c388ad825e
Author: Kotresh HR <khiremat>
Date:   Wed Feb 15 03:44:17 2017 -0500

    geo-rep: Fix xsync crawl

    If stime is set to (0, 0) on the master brick root, a complete sync
    is expected, ignoring the stime set on sub-directories. But while
    initializing the stime variable for comparison, it was initialized
    to (-1, 0) instead of (0, 0). Fixed the same.

    The stime is set to (0, 0) with the 'reset-sync-time' option while
    deleting the session:

        gluster vol geo-rep master fedora1::slave delete reset-sync-time

    This scenario happens when the geo-rep session is deleted as above
    and, for some reason, the session is re-established with the same
    slave volume after deleting the data on the slave volume.

    Change-Id: Ie5bc8f008dead637a09495adeef5577e2b33bc90
    BUG: 1422760
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16629
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
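The effect of the initialization the commit message describes can be seen in a small Python sketch. This is illustrative only, not the actual master.py diff: the names effective_stime and RESET_STIME are invented for the example, and URXTIME = (-1, 0) is assumed to be the "no xtime set" sentinel the commit message refers to. stimes are (seconds, nanoseconds) tuples, compared lexicographically.

# Illustrative sketch only (not the actual gsyncd patch).
RESET_STIME = (0, 0)   # left on the brick root by 'delete reset-sync-time'
URXTIME = (-1, 0)      # assumed sentinel for "no xtime set"

def effective_stime(root_stime, subdir_stime):
    """Pick the stime the xsync crawl compares entry xtimes against."""
    # Bug: the comparison value was initialized to URXTIME = (-1, 0),
    # so a root stime of (0, 0) never matched, the stale stime still
    # present on a sub-directory was honoured, and entries below the
    # first level compared as "already synced" and were skipped.
    # Fix: initialize with (0, 0), so a reset root stime overrides the
    # sub-directory stimes and forces a complete re-sync.
    if root_stime == RESET_STIME:
        return RESET_STIME
    return subdir_stime

# After reset-sync-time the root stime is (0, 0); with the fix, an
# entry whose xtime is e.g. (1487145857, 0) compares greater than the
# effective stime (0, 0) and is picked up for syncing again:
assert effective_stime((0, 0), (1487145857, 0)) < (1487145857, 0)

Because stimes are plain tuples, Python's lexicographic tuple ordering handles the (sec, nsec) comparison, which is also why (-1, 0) sorts below every real timestamp and serves as a lower-bound sentinel.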
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/