Bug 1422576 - [RFE]: provide a CLI option to reset the stime while deleting the geo-rep session
Summary: [RFE]: provide a CLI option to reset the stime while deleting the geo-rep session
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Aravinda VK
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1351503
 
Reported: 2017-02-15 15:31 UTC by Rahul Hinduja
Modified: 2017-03-23 06:05 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 06:05:22 UTC




Links
System ID: Red Hat Product Errata RHSA-2017:0486
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update
Last Updated: 2017-03-23 09:18:45 UTC

Description Rahul Hinduja 2017-02-15 15:31:16 UTC
Description of problem:
=======================

The scenario described in bug 1205162 introduced a new option to the geo-rep session delete command:

gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete [reset-sync-time]

This RFE tracks that CLI option.

Patch commit message:
======================

To avoid testing against directory-specific stime, the remote stime is assumed to be minus_infinity if the root directory stime is set to (0,0) before the directory scan begins. This triggers a full volume resync to the slave when a geo-rep session is recreated with the same master-slave volume pair.

Command synopsis:

gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete [reset-sync-time]

The gluster CLI man page is updated to include the new sub-command reset-sync-time.
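The (0,0) root stime mentioned above is stored as an extended attribute on each brick root, so the effect of delete reset-sync-time can be inspected directly on a brick node. A minimal sketch; the xattr name pattern trusted.glusterfs.<MASTER_UUID>.<SLAVE_UUID>.stime and the brick path /bricks/brick1/b1 are assumptions for illustration:

# Run as root on a master brick node; dump geo-rep stime xattrs on the brick root.
# After 'delete reset-sync-time' the stime value should be reset to (0,0).
# (The xattr name pattern and brick path are assumptions, not from this bug.)
getfattr -d -m 'trusted.glusterfs.*.stime' -e hex /bricks/brick1/b1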

Comment 3 Atin Mukherjee 2017-02-16 09:16:16 UTC
Upstream mainline : http://review.gluster.org/14051
Upstream 3.8 : http://review.gluster.org/14953

The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.

Comment 5 Rahul Hinduja 2017-02-20 16:57:46 UTC
Based on the agreement, this feature has undergone limited testing for 3.2.0. The following scenarios are covered:

1. Create Master and Slave cluster/volume
2. Create geo-rep session between master and slave
3. Create some data on master:

crefi -T 10 -n 10 --multi -d 5 -b 5 --random --max=5K --min=1K --f=create /mnt/master/

AND,

mkdir data; cd data ; for i in {1..999}; do dd if=/dev/zero of=dd.$i bs=1M count=1 ; done

4. Let the data sync to the slave.
5. Stop and delete the geo-rep session using reset-sync-time
6. Remove the data created by crefi from the slave mount
7. Append data on the master to the files under the data directory
8. Recreate the geo-rep session using force
9. Start the geo-rep session (see the command sketch after the result below)

Result: Files sync correctly to the slave and the arequal checksums match.
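For reference, the stop/delete/recreate sequence in steps 5, 8 and 9 maps onto the commands below. This is a sketch, not a capture from this run: the volume names and slave host are borrowed from the CLI validation further down, and create push-pem force assumes the passwordless SSH setup from the original session is still in place.

# Step 5: stop the session, then delete it and reset the stime
gluster volume geo-replication master 10.70.43.249::slave stop
gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-time

# Steps 8 and 9: recreate the session with force and start it again
gluster volume geo-replication master 10.70.43.249::slave create push-pem force
gluster volume geo-replication master 10.70.43.249::slave start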

10. Stop and delete the geo-rep session again using reset-sync-time
11. Remove all the data from the slave (rm -rf *)
12. Recreate the geo-rep session
13. Start the geo-rep session

Result: Files sync correctly to the slave and the arequal checksums match.
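The arequal comparison in these results checks that the master and slave contents are identical. A minimal sketch, assuming the arequal-checksum utility is installed and that the master and slave volumes are mounted at /mnt/master and /mnt/slave (the slave mount point is an assumption):

# Checksum both mount points; the two outputs should match exactly
arequal-checksum -p /mnt/master
arequal-checksum -p /mnt/slave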


14. Stop and delete the geo-rep session again using reset-sync-time
15. Remove all the data from the master and the slave (rm -rf *)
16. Recreate the data on the master
17. Set the change_detector to xsync
18. Recreate the geo-rep session
19. Start the geo-rep session


Result: Files sync correctly to the slave via the hybrid crawl and the arequal checksums match.

20. Stop and delete the geo-rep session again using reset-sync-time
21. Remove all the data from the master and the slave (rm -rf *)
22. Set the change_detector to xsync
23. Recreate the geo-rep session
24. Start the geo-rep session


Result: Files sync correctly to the slave via the hybrid crawl and the arequal checksums match.
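Steps 17 and 22 switch the crawl mechanism through the geo-rep config interface, which is what forces the hybrid (xsync) crawl verified above. A sketch using the same volume names as the CLI validation below:

# Force the xsync crawl instead of the changelog-based crawl for this session
gluster volume geo-replication master 10.70.43.249::slave config change_detector xsync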


CLI Validation:
===============

[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave stop
Stopping geo-replication session between master & 10.70.43.249::slave has been successful
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-TIME
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-tim
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete \!reset-sync-time
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete Reset-sync-time
Command type not found while handling geo-replication options
geo-replication command failed
[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave delete reset-sync-time
Deleting geo-replication session between master & 10.70.43.249::slave has been successful
[root@dhcp42-7 brick1]# 


Based on the above results, moving this bug to the Verified state.

Comment 7 errata-xmlrpc 2017-03-23 06:05:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

