Bug 996412 - Dist-geo-rep : when user starts geo rep session between master and slave cluster, default change_detector should be changelog not xsync
Summary: Dist-geo-rep : when user starts geo rep session between master and slave cluster, default change_detector should be changelog not xsync
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-13 06:46 UTC by Rachana Patel
Modified: 2015-04-20 11:58 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.4.0.33rhs
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-27 15:31:04 UTC
Embargoed:


Attachments
slave (2.54 MB, text/plain)
2013-08-13 06:55 UTC, Rachana Patel


Links
Red Hat Product Errata RHBA-2013:1769 (normal, SHIPPED_LIVE): Red Hat Storage 2.1 enhancement and bug fix update #1, last updated 2013-11-27 20:17:39 UTC

Description Rachana Patel 2013-08-13 06:46:14 UTC
Description of problem:
Dist-geo-rep: when a user starts a geo-rep session between the master and slave clusters, the default change_detector should be changelog, not xsync.

Version-Release number of selected component (if applicable):
3.4.0.18rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Installed the ISO RHS-2.1-20130806.n.2 and upgraded the RPMs to 3.4.0.18rhs-1.el6rhs.x86_64.
2. Created the master and slave clusters.
3. On the master cluster, created two volumes, master1 and master2, and created data from the mount points (a mount sketch follows the df output below):
[root@rhs-client22 ~]# df -h /mnt/master1
Filesystem            Size  Used Avail Use% Mounted on
10.70.37.41:/master1  250G  250G     0 100% /mnt/master1
[root@rhs-client22 ~]# df -h /mnt/master2
Filesystem            Size  Used Avail Use% Mounted on
10.70.37.41:/master2  150G  117G   34G  78% /mnt/master2
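
For reference, a minimal sketch of how such client mounts are created, assuming plain GlusterFS FUSE mounts; the volfile server 10.70.37.41 and the mount points are taken from the df output above, while the exact mount options used in this setup are not recorded in the report:

# Sketch: FUSE-mount the two master volumes (server and paths from the df output)
mkdir -p /mnt/master1 /mnt/master2
mount -t glusterfs 10.70.37.41:/master1 /mnt/master1
mount -t glusterfs 10.70.37.41:/master2 /mnt/master2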


4. Created a geo-rep session for each of the two volumes (a sketch of the session setup follows the status output below):
[root@DVM6 ~]# gluster volume geo master1 rhsauto018.lab.eng.blr.redhat.com::slave1 status
NODE                           MASTER     SLAVE                                        HEALTH    UPTIME         
------------------------------------------------------------------------------------------------------------
DVM6.lab.eng.blr.redhat.com    master1    rhsauto018.lab.eng.blr.redhat.com::slave1    Stable    10:41:57       
DVM2.lab.eng.blr.redhat.com    master1    rhsauto018.lab.eng.blr.redhat.com::slave1    Stable    10:41:56       
DVM3.lab.eng.blr.redhat.com    master1    rhsauto018.lab.eng.blr.redhat.com::slave1    Stable    10:41:57       
DVM4.lab.eng.blr.redhat.com    master1    rhsauto018.lab.eng.blr.redhat.com::slave1    Stable    10:41:57       
DVM5.lab.eng.blr.redhat.com    master1    rhsauto018.lab.eng.blr.redhat.com::slave1    Stable    10:41:57       
[root@DVM6 ~]# gluster volume geo master2 rhsauto018.lab.eng.blr.redhat.com::slave2 status
NODE                           MASTER     SLAVE                                        HEALTH    UPTIME         
------------------------------------------------------------------------------------------------------------
DVM6.lab.eng.blr.redhat.com    master2    rhsauto018.lab.eng.blr.redhat.com::slave2    Stable    10:41:53       
DVM3.lab.eng.blr.redhat.com    master2    rhsauto018.lab.eng.blr.redhat.com::slave2    Stable    10:41:53       
DVM5.lab.eng.blr.redhat.com    master2    rhsauto018.lab.eng.blr.redhat.com::slave2    Stable    10:41:53       
DVM2.lab.eng.blr.redhat.com    master2    rhsauto018.lab.eng.blr.redhat.com::slave2    Stable    10:41:53       
DVM4.lab.eng.blr.redhat.com    master2    rhsauto018.lab.eng.blr.redhat.com::slave2    Stable    10:41:53
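
The create/start commands themselves are not captured above; a minimal sketch of how such sessions are typically set up with the RHS 2.1 geo-replication CLI, assuming passwordless SSH to the slave host is already in place:

# Sketch: one session per master volume; create (distributing the pem keys) and start
gluster volume geo-replication master1 rhsauto018.lab.eng.blr.redhat.com::slave1 create push-pem
gluster volume geo-replication master1 rhsauto018.lab.eng.blr.redhat.com::slave1 start
gluster volume geo-replication master2 rhsauto018.lab.eng.blr.redhat.com::slave2 create push-pem
gluster volume geo-replication master2 rhsauto018.lab.eng.blr.redhat.com::slave2 start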

5. Verified in the volume info that changelog is enabled (a brick-level check follows the output below):
[root@DVM6 ~]# gluster v info
 
Volume Name: master1
Type: Distribute
Volume ID: 7b652aeb-24df-448b-a23f-253020680f75
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: 10.70.37.41:/rhs/brick1
Brick2: 10.70.37.151:/rhs/brick1
Brick3: 10.70.37.105:/rhs/brick1
Brick4: 10.70.37.191:/rhs/brick1
Brick5: 10.70.37.43:/rhs/brick1
Options Reconfigured:
geo-replication.indexing: on
changelog.changelog: on

 
Volume Name: master2
Type: Distributed-Replicate
Volume ID: 454def1a-5990-47b4-9218-bda22c620160
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.37.41:/rhs/brick2
Brick2: 10.70.37.151:/rhs/brick2
Brick3: 10.70.37.105:/rhs/brick2
Brick4: 10.70.37.191:/rhs/brick2
Brick5: 10.70.37.43:/rhs/brick2
Brick6: 10.70.37.43:/rhs/brick5/1
Options Reconfigured:
geo-replication.indexing: on
changelog.changelog: on
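
Since changelog.changelog is on for both volumes, each brick should also be writing journal files to disk. A quick sanity check, assuming the default changelog location under the brick's .glusterfs directory:

# CHANGELOG.<timestamp> files should accumulate here at each rollover interval
ls /rhs/brick1/.glusterfs/changelogs/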

6. Checked the change_detector (a single-option query sketch follows the output below):
[root@DVM6 ~]# gluster volume geo master2 rhsauto018.lab.eng.blr.redhat.com::slave2 config | grep change
change_detector: xsync
[root@DVM6 ~]# gluster volume geo master1 rhsauto018.lab.eng.blr.redhat.com::slave1 config | grep change
change_detector: xsync
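
A single option can also be queried directly instead of grepping the full config dump; a sketch:

# Print only the change_detector value for one session
gluster volume geo-replication master1 rhsauto018.lab.eng.blr.redhat.com::slave1 config change_detector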


Actual results:
The change_detector is always xsync.

Expected results:
It should be changelog.

Additional info:
Stopped the session, set change_detector to changelog via the config interface, and started the session again; it still falls back to xsync.
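
A minimal sketch of that override sequence, assuming the standard geo-replication CLI (the exact invocation is not preserved in this report; session and volume names are taken from the output above):

# Sketch: stop the session, pin change_detector to changelog, restart
gluster volume geo-replication master1 rhsauto018.lab.eng.blr.redhat.com::slave1 stop
gluster volume geo-replication master1 rhsauto018.lab.eng.blr.redhat.com::slave1 config change_detector changelog
gluster volume geo-replication master1 rhsauto018.lab.eng.blr.redhat.com::slave1 start
# After restart the worker still falls back to xsync, as the log below shows.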

Log:
[2013-08-10 15:38:53.86857] I [monitor(monitor):81:set_state] Monitor: new state: Initializing...
[2013-08-10 15:38:53.91924] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2013-08-10 15:38:53.92241] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2013-08-10 15:38:53.248121] I [gsyncd(/rhs/brick1):503:main_i] <top>: syncing: gluster://localhost:master1 -> ssh://root@rhsauto018.lab.eng.blr.redhat.com:gluster://localhost:slave1
[2013-08-10 15:38:56.59674] I [master(/rhs/brick1):61:gmaster_builder] <top>: setting up xsync change detection mode
[2013-08-10 15:38:56.69093] I [master(/rhs/brick1):61:gmaster_builder] <top>: setting up changelog change detection mode
[2013-08-10 15:38:56.73467] I [master(/rhs/brick1):863:register] _GMaster: xsync temp directory: /var/run/gluster/master1/ssh%3A%2F%2Froot%4010.70.36.248%3Agluster%3A%2F%2F127.0.0.1%3Aslave1/85acebcd7c65ee7c4550f76de44279a9/xsync
[2013-08-10 15:39:06.142046] I [master(/rhs/brick1):648:fallback_xsync] _GMaster: falling back to xsync mode
[2013-08-10 15:39:06.147505] I [syncdutils(/rhs/brick1):158:finalize] <top>: exiting.
[2013-08-10 15:39:07.74258] I [monitor(monitor):81:set_state] Monitor: new state: faulty
[2013-08-10 15:39:17.88939] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2013-08-10 15:39:17.89445] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2013-08-10 15:39:17.272384] I [gsyncd(/rhs/brick1):503:main_i] <top>: syncing: gluster://localhost:master1 -> ssh://root@rhsauto018.lab.eng.blr.redhat.com:gluster://localhost:slave1
[2013-08-10 15:39:19.915723] I [master(/rhs/brick1):61:gmaster_builder] <top>: setting up xsync change detection mode
[2013-08-10 15:39:19.917727] I [master(/rhs/brick1):61:gmaster_builder] <top>: setting up xsync change detection mode
[2013-08-10 15:39:19.920238] I [master(/rhs/brick1):863:register] _GMaster: xsync temp directory: /var/run/gluster/master1/ssh%3A%2F%2Froot%4010.70.36.248%3Agluster%3A%2F%2F127.0.0.1%3Aslave1/85acebcd7c65ee7c4550f76de44279a9/xsync

Comment 2 Rachana Patel 2013-08-13 06:55:48 UTC
Created attachment 786014 [details]
slave

Comment 5 Gowrishankar Rajaiyan 2013-09-25 14:01:54 UTC
Fixed-in version and the commit URL, please.

Comment 6 Vijaykumar Koppad 2013-09-30 11:51:14 UTC
Verified in glusterfs-3.4.0.33rhs.

Comment 8 errata-xmlrpc 2013-11-27 15:31:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html

