Description of problem:
=======================
The scenario described in bug 1205162 introduced a new option for deleting a
geo-rep session:

    gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete \
        [reset-sync-time]

This RFE tracks that CLI option.

Patch commit message:
=====================
To avoid testing against the directory-specific stime, the remote stime is
assumed to be minus_infinity if the root directory stime is set to (0,0)
before the directory scan begins. This triggers a full volume resync to the
slave when a geo-rep session is recreated with the same master-slave volume
pair.

Command synopsis:

    gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete \
        [reset-sync-time]

The gluster CLI man page has been updated to include the new sub-command
reset-sync-time.
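A typical teardown-and-recreate sequence using the new option is sketched
below; a minimal sketch, assuming an existing session between a master volume
"master" and a slave volume "slave" on host "slavehost" (names are
illustrative):

    # Stop the running session before deleting it.
    gluster volume geo-replication master slavehost::slave stop

    # Delete the session and reset the root stime to (0,0), so that the
    # next session on the same volume pair performs a full resync.
    gluster volume geo-replication master slavehost::slave delete reset-sync-time

    # Recreate and restart the session; the slave is resynced from scratch.
    gluster volume geo-replication master slavehost::slave create push-pem force
    gluster volume geo-replication master slavehost::slave start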
Upstream mainline : http://review.gluster.org/14051
Upstream 3.8      : http://review.gluster.org/14953

The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.
Based on the agreement, this feature has gone through limited testing for
3.2.0. The following scenarios are covered (see the verification and config
sketches after the CLI validation below):

1.  Create the master and slave cluster/volume.
2.  Create a geo-rep session between master and slave.
3.  Create some data on the master:
        crefi -T 10 -n 10 --multi -d 5 -b 5 --random --max=5K --min=1K --f=create /mnt/master/
    and:
        mkdir data; cd data; for i in {1..999}; do dd if=/dev/zero of=dd.$i bs=1M count=1; done
4.  Let the data be synced to the slave.
5.  Stop and delete the geo-rep session using reset-sync-time.
6.  Remove the data created by crefi from the slave mount.
7.  Append data on the master to the files in data.
8.  Recreate the geo-rep session using force.
9.  Start the geo-rep session.
    Result: Files sync properly to the slave and arequal matches.
10. Stop and delete the geo-rep session again using reset-sync-time.
11. Remove all data from the slave (rm -rf *).
12. Recreate the geo-rep session.
13. Start the geo-rep session.
    Result: Files sync properly to the slave and arequal matches.
14. Stop and delete the geo-rep session again using reset-sync-time.
15. Remove all data from the master and slave (rm -rf *).
16. Recreate the data on the master.
17. Set the change_detector to xsync.
18. Recreate the geo-rep session.
19. Start the geo-rep session.
    Result: Files sync properly to the slave via hybrid crawl and arequal matches.
20. Stop and delete the geo-rep session again using reset-sync-time.
21. Remove all data from the master and slave (rm -rf *).
22. Set the change_detector to xsync.
23. Recreate the geo-rep session.
24. Start the geo-rep session.
    Result: Files sync properly to the slave via hybrid crawl and arequal matches.

CLI Validation:
===============
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave stop
Stopping geo-replication session between master & 10.70.43.249::slave has been successful
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-TIME
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-tim
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete \!reset-sync-time
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete Reset-sync-time
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-time
Deleting geo-replication session between master & 10.70.43.249::slave has been successful
[root@dhcp42-7 brick1]#

Based on the above results, moving this bug to the verified state.
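Verification sketch: the "arequal matches" checks above compare recursive
checksums of the master and slave mounts; a minimal sketch, assuming the
arequal-checksum tool is installed and the volumes are mounted at /mnt/master
and /mnt/slave (paths are illustrative):

    # Compute a recursive checksum of each mount; identical output for the
    # two mounts indicates master and slave are in sync.
    arequal-checksum -p /mnt/master
    arequal-checksum -p /mnt/slave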
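Config sketch: steps 17 and 22 switch the crawl mechanism from changelog to
hybrid (xsync) crawl through the session config; a minimal sketch against the
session used above:

    # Switch change detection to xsync so the worker uses a hybrid
    # (filesystem) crawl instead of the changelog.
    gluster volume geo-replication master 10.70.43.249::slave config change_detector xsync

    # Verify the current setting.
    gluster volume geo-replication master 10.70.43.249::slave config change_detector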
Since the problem described in this bug report should be resolved in a recent
advisory, it has been closed with a resolution of ERRATA. For information on
the advisory, and where to find the updated files, follow the link below. If
the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html