Bug 1238699 - [geo-rep]: geo-rep start after snapshot restore makes the geo-rep faulty and no sync will happen
Summary: [geo-rep]: geo-rep start after snapshot restore makes the geo-rep faulty and ...
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: urgent
Target Milestone: ---
Assignee: Shwetha K Acharya
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Duplicates: 1569612 1713233
Depends On:
Blocks: 1216951 1238540
 
Reported: 2015-07-02 12:44 UTC by Rahul Hinduja
Modified: 2020-01-08 02:24 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
The Changelog History API expects the brick path to remain the same for a session. However, on snapshot restore, the brick path changes. This causes the History API to fail and the geo-replication status to change to Faulty.

Workaround:
1. After the snapshot restore, ensure the master and slave volumes are stopped.
2. Back up the htime directory (of the master volume):
cp -a <brick_htime_path> <backup_path>
NOTE: The -a option is important to preserve extended attributes.
For example:
cp -a /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime /opt/backup_htime/brick0_b0
3. Run the following command to replace the old path in the htime file(s) with the new brick path:
find <new_brick_htime_path> -name 'HTIME.*' -print0 | \
xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'
where OLD_BRICK_PATH is the brick path of the gluster volume before the snapshot restore, and NEW_BRICK_PATH is the brick path after the snapshot restore.
For example:
find /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0 | \
xargs -0 sed -ci 's|/bricks/brick0/b0/|/var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/|g'
4. Start the master and slave volumes and the geo-replication session on the restored volume. The status should update to Active.
Clone Of:
Environment:
Last Closed: 2018-04-16 15:57:07 UTC
Target Upstream Version:



Description Rahul Hinduja 2015-07-02 12:44:09 UTC
Description of problem:
======================

We support snapshots on a geo-rep setup with some additional steps, such as pausing/stopping the session before creating or restoring the snapshot. This is covered in the RHGS guide and fully supported as documented.

However, once the volumes (master and slave) are restored, the default status changes from Stopped to N/A, and the geo-rep session goes to Faulty when started.

[2015-07-02 18:10:45.90424] I [master(/var/run/gluster/snaps/b8a4073db05840b6b212253d23b4b102/brick1/b1):519:crawlwrap] _GMaster: primary master 
with volume id 8ed23eb5-4494-4f76-b43a-5c225b5ac2e8 ...
[2015-07-02 18:10:45.97951] I [master(/var/run/gluster/snaps/b8a4073db05840b6b212253d23b4b102/brick1/b1):528:crawlwrap] _GMaster: crawl interval:
 1 seconds
[2015-07-02 18:10:45.111330] I [master(/var/run/gluster/snaps/b8a4073db05840b6b212253d23b4b102/brick1/b1):1123:crawl] _GMaster: starting history 
crawl... turns: 1, stime: (1435839078, 0)
[2015-07-02 18:10:45.112297] E [repce(agent):117:worker] <top>: call failed: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 54, in history
    num_parallel)
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 100, in cl_history_changelog
    cls.raise_changelog_err()
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 27, in raise_changelog_err
    raise ChangelogException(errn, os.strerror(errn))
ChangelogException: [Errno 2] No such file or directory


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.1-6.el6rhs.x86_64

How reproducible:
=================

Always

Steps to Reproduce:
====================
1. Pause/Stop the geo-rep session between master and slave
2. Stop slave and Master volume
3. Restore Slave volume, and then restore master volume
4. Start geo-rep session

Actual results:
===============

geo-rep session becomes faulty

Expected results:
=================

geo-rep should be active

Comment 10 monti lawrence 2015-07-22 21:03:54 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 11 Saravanakumar 2015-07-23 10:00:57 UTC
Workaround:

1.
Ensure the current volume is stopped.

2.
Back up the htime directory (of the current volume) first.

Command to use:
cp -a <brick_htime_path> <backup_path>

	For example:
	cp -a /opt/volume_test/tv_1/b1/.glusterfs/changelogs/htime  /opt/backup_htime/

Please note: the -a option is important to preserve extended attributes.
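The behaviour of cp -a can be checked on any scratch file before touching the real htime directory; a minimal sketch (all paths here are made up for illustration; it verifies mode and mtime, two of the attributes -a preserves):

```shell
tmp=$(mktemp -d)

# Scratch file with a distinctive mode and mtime.
echo data > "$tmp/htime_sample"
chmod 640 "$tmp/htime_sample"
touch -d '2015-07-23 12:00:00' "$tmp/htime_sample"

# Archive copy: -a preserves mode, timestamps, and extended attributes.
cp -a "$tmp/htime_sample" "$tmp/backup_a"

# Mode and mtime of the copy match the original.
stat -c '%a %Y' "$tmp/htime_sample"
stat -c '%a %Y' "$tmp/backup_a"

rm -rf "$tmp"
```

A plain cp of the same file would reset the mtime to the copy time, which is why the note above insists on -a.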

3. 
Carry out snapshot restore.

4.
After snapshot restore, run the following command.

Command to use:
find <brick_htime_path> -name 'HTIME.*' -print0  | \
xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'

	Here, OLD_BRICK_PATH is the brick path of the current volume, and
	NEW_BRICK_PATH is the brick path after the snapshot restore.

	For example:

	find /opt/volume_test/tv_1/b1/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0  | \
	xargs -0 sed -ci 's|/opt/volume_test/tv_1/b1/|/opt/volume_test/tv_1/b1.1/|g'

The htime files are now updated with the new brick path.

5. 
Now the geo-rep session can be started for this restored volume.

Comment 12 Saravanakumar 2015-07-23 13:39:00 UTC
(In reply to Saravanakumar from comment #11)
> 4.
> After snapshot restore, run the following command.
> 
> Command to use:
> find <brick_htime_path> - name 'HTIME.*' -print0  | \
> xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'
> 
> 	Here,OLD_BRICK_PATH is the brick path of the current volume,
> 	NEW_BRICK_PATH is the brick path "after" snapshot restore.
> 
> 	For example:
> 
> 	find /opt/volume_test/tv_1/b1/.glusterfs/changelogs/htime/ -name 'HTIME.*'
> -print0  | \
> 	xargs -0 sed -ci 's|/opt/volume_test/tv_1/b1/|/opt/volume_test/tv_1/b1.1/|g'
> 
> htime files are updated with updated brick path.

There is a typo here:
"find <brick_htime_path>" should be "find <new_brick_htime_path>".

Giving complete workaround, with corrections:

1.
Ensure the current volume is stopped.

2.
Back up the htime directory (of the current volume) first.

Command to use:
cp -a <brick_htime_path> <backup_path>

	For example:
	cp -a /opt/volume_test/tv_1/b1/.glusterfs/changelogs/htime  /opt/backup_htime/

Please note: the -a option is important to preserve extended attributes.

3. 
Carry out snapshot restore.

4.
After snapshot restore, run the following command.

Command to use:
find <new_brick_htime_path> -name 'HTIME.*' -print0  | \
xargs -0 sed -ci 's|<OLD_BRICK_PATH>|<NEW_BRICK_PATH>|g'

	Here, OLD_BRICK_PATH is the brick path of the current volume, and
	NEW_BRICK_PATH is the brick path after the snapshot restore.

	For example:

	find /opt/volume_test/tv_1/b1.1/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0  | \
	xargs -0 sed -ci 's|/opt/volume_test/tv_1/b1/|/opt/volume_test/tv_1/b1.1/|g'

The htime files are now updated with the new brick path.

5. 
Now the geo-rep session can be started for the restored volume (after starting the slave and master volumes, in that order).
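The step-4 rewrite can be rehearsed on a mock htime directory before running it against the real one; a minimal sketch, with made-up paths mirroring the example above:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/htime"

# Mock HTIME file whose entry still references the pre-restore brick path.
printf '/opt/volume_test/tv_1/b1/.glusterfs/changelogs/CHANGELOG.1437655226' \
  > "$tmp/htime/HTIME.1437655211"

# Same find | xargs | sed pipeline as the workaround; GNU sed's -c makes
# -i edit by copy rather than rename, preserving the inode and xattrs.
find "$tmp/htime" -name 'HTIME.*' -print0 | \
xargs -0 sed -ci 's|/opt/volume_test/tv_1/b1/|/opt/volume_test/tv_1/b1.1/|g'

# The entry now carries the post-restore path.
cat "$tmp/htime/HTIME.1437655211"
rm -rf "$tmp"
```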

Comment 13 Saravanakumar 2015-07-23 13:40:56 UTC
WITHOUT WORKAROUND - COMPLETE LOGS:

[root@gfvm2 ~]# glusterd -LDEBUG
[root@gfvm2 ~]# gluster volume create tv1 gfvm2:/rhs/brick1/b1
[root@gfvm2 ~]# gluster volume create tv2 gfvm2:/rhs/brick2/b2
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume status tv1
Status of volume: tv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick1/b1                  49156     0          Y       14229
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gfvm2 ~]# gluster volume status tv2
Volume tv2 is not started
[root@gfvm2 ~]# gluster volume start tv2
volume start: tv2: success
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume status 
Status of volume: tv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick1/b1                  49156     0          Y       14229
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: tv2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick2/b2                  49157     0          Y       14291
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gfvm2 ~]# mount -t glusterfs gfvm2:/tv1 /mnt/master/
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# mount -t glusterfs gfvm2:/tv2 /mnt/slave
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 create push-pem force
Creating geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 start
Starting geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED          
--------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    N/A           Initializing...    N/A             N/A                  
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED          
--------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    gfvm2         Active    Changelog Crawl    N/A                  
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# cd /mnt/master/
[root@gfvm2 master]# ls
[root@gfvm2 master]# touch file{1..100}
[root@gfvm2 master]# 

[root@gfvm2 master]# sleep 15
ls


ls /mnt/slave/
[root@gfvm2 master]# ls
file1    file16  file23  file30  file38  file45  file52  file6   file67  file74  file81  file89  file96
file10   file17  file24  file31  file39  file46  file53  file60  file68  file75  file82  file9   file97
file100  file18  file25  file32  file4   file47  file54  file61  file69  file76  file83  file90  file98
file11   file19  file26  file33  file40  file48  file55  file62  file7   file77  file84  file91  file99
file12   file2   file27  file34  file41  file49  file56  file63  file70  file78  file85  file92
file13   file20  file28  file35  file42  file5   file57  file64  file71  file79  file86  file93
file14   file21  file29  file36  file43  file50  file58  file65  file72  file8   file87  file94
file15   file22  file3   file37  file44  file51  file59  file66  file73  file80  file88  file95
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# ls /mnt/slave/
file1    file16  file23  file30  file38  file45  file52  file6   file67  file74  file81  file89  file96
file10   file17  file24  file31  file39  file46  file53  file60  file68  file75  file82  file9   file97
file100  file18  file25  file32  file4   file47  file54  file61  file69  file76  file83  file90  file98
file11   file19  file26  file33  file40  file48  file55  file62  file7   file77  file84  file91  file99
file12   file2   file27  file34  file41  file49  file56  file63  file70  file78  file85  file92
file13   file20  file28  file35  file42  file5   file57  file64  file71  file79  file86  file93
file14   file21  file29  file36  file43  file50  file58  file65  file72  file8   file87  file94
file15   file22  file3   file37  file44  file51  file59  file66  file73  file80  file88  file95
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume status
Status of volume: tv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick1/b1                  49156     0          Y       14229
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: tv2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick2/b2                  49157     0          Y       14291
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED                  
----------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    gfvm2         Active    Changelog Crawl    2015-07-23 18:11:10          
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 pause
Pausing geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
-----------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    N/A           Paused    N/A             N/A                  
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot create snap_mv tv1  
snapshot create: success: Snap snap_mv_GMT-2015.07.23-12.58.25 created successfully
[root@gfvm2 master]# gluster snapshot list
snap_mv_GMT-2015.07.23-12.58.25
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot create snap_sv tv2 
snapshot create: success: Snap snap_sv_GMT-2015.07.23-12.58.55 created successfully
[root@gfvm2 master]# gluster snapshot list
snap_mv_GMT-2015.07.23-12.58.25
snap_sv_GMT-2015.07.23-12.58.55
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
-----------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    N/A           Paused    N/A             N/A                  


[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 resume
Resuming geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# ls /boot/
config-3.17.4-301.fc21.x86_64
grub2
initramfs-0-rescue-5dfdc57e2fcb401db9e1edea7c903318.img
initramfs-3.17.4-301.fc21.x86_64.img
initrd-plymouth.img
lost+found
System.map-3.17.4-301.fc21.x86_64
vmlinuz-0-rescue-5dfdc57e2fcb401db9e1edea7c903318
vmlinuz-3.17.4-301.fc21.x86_64
[root@gfvm2 master]# cp /boot/System.map-3.17.4-301.fc21.x86_64  /boot/config-3.17.4-301.fc21.x86_64  . 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# ls
config-3.17.4-301.fc21.x86_64  file21  file35  file49  file62  file76  file9
file1                          file22  file36  file5   file63  file77  file90
file10                         file23  file37  file50  file64  file78  file91
file100                        file24  file38  file51  file65  file79  file92
file11                         file25  file39  file52  file66  file8   file93
file12                         file26  file4   file53  file67  file80  file94
file13                         file27  file40  file54  file68  file81  file95
file14                         file28  file41  file55  file69  file82  file96
file15                         file29  file42  file56  file7   file83  file97
file16                         file3   file43  file57  file70  file84  file98
file17                         file30  file44  file58  file71  file85  file99
file18                         file31  file45  file59  file72  file86  System.map-3.17.4-301.fc21.x86_64
file19                         file32  file46  file6   file73  file87
file2                          file33  file47  file60  file74  file88
file20                         file34  file48  file61  file75  file89
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status  
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS     LAST_SYNCED                  
--------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    gfvm2         Active    History Crawl    2015-07-23 18:11:10          
[root@gfvm2 master]# ls /mnt/slave/
file1    file16  file23  file30  file38  file45  file52  file6   file67  file74  file81  file89  file96
file10   file17  file24  file31  file39  file46  file53  file60  file68  file75  file82  file9   file97
file100  file18  file25  file32  file4   file47  file54  file61  file69  file76  file83  file90  file98
file11   file19  file26  file33  file40  file48  file55  file62  file7   file77  file84  file91  file99
file12   file2   file27  file34  file41  file49  file56  file63  file70  file78  file85  file92
file13   file20  file28  file35  file42  file5   file57  file64  file71  file79  file86  file93
file14   file21  file29  file36  file43  file50  file58  file65  file72  file8   file87  file94
file15   file22  file3   file37  file44  file51  file59  file66  file73  file80  file88  file95
[root@gfvm2 master]# sleep 15;ls /mnt/slave/
config-3.17.4-301.fc21.x86_64  file21  file35  file49  file62  file76  file9
file1                          file22  file36  file5   file63  file77  file90
file10                         file23  file37  file50  file64  file78  file91
file100                        file24  file38  file51  file65  file79  file92
file11                         file25  file39  file52  file66  file8   file93
file12                         file26  file4   file53  file67  file80  file94
file13                         file27  file40  file54  file68  file81  file95
file14                         file28  file41  file55  file69  file82  file96
file15                         file29  file42  file56  file7   file83  file97
file16                         file3   file43  file57  file70  file84  file98
file17                         file30  file44  file58  file71  file85  file99
file18                         file31  file45  file59  file72  file86  System.map-3.17.4-301.fc21.x86_64
file19                         file32  file46  file6   file73  file87
file2                          file33  file47  file60  file74  file88
file20                         file34  file48  file61  file75  file89
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 stop 
Stopping geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume stop tv2 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: tv2: success
[root@gfvm2 master]# gluster volume stop tv1 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: tv1: success
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster  snapshot restore  snap_sv_GMT-2015.07.23-12.58.55 
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: snap_sv_GMT-2015.07.23-12.58.55: Snap restored successfully
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot restore  snap_mv_GMT-2015.07.23-12.58.25 
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: snap_mv_GMT-2015.07.23-12.58.25: Snap restored successfully
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume info tv1
 
Volume Name: tv1
Type: Distribute
Volume ID: d05d2708-c5ab-4f21-8958-dc8187884776
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfvm2:/run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
[root@gfvm2 master]# gluster volume info tv2
 
Volume Name: tv2
Type: Distribute
Volume ID: ade449ed-158c-4aed-a125-9e960a32c75a
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfvm2:/run/gluster/snaps/d1573ded3a0448b6826f56755c13d0aa/brick1/b2
Options Reconfigured:
performance.readdir-ahead: on
[root@gfvm2 master]# mount | grep gluster
gfvm2:/tv1 on /mnt/master type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfvm2:/tv2 on /mnt/slave type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
/dev/mapper/RHS_vg1-18fff7e7ecae4d1299a935fa730293ee_0 on /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1 type xfs (rw,noatime,nodiratime,nouuid,attr2,inode64,logbsize=128k,sunit=256,swidth=2560,noquota)
/dev/mapper/RHS_vg2-d1573ded3a0448b6826f56755c13d0aa_0 on /run/gluster/snaps/d1573ded3a0448b6826f56755c13d0aa/brick1 type xfs (rw,noatime,nodiratime,nouuid,attr2,inode64,logbsize=128k,sunit=256,swidth=2560,noquota)
[root@gfvm2 master]# gluster volume start tv2 
volume start: tv2: success
[root@gfvm2 master]# gluster volume start tv1 
volume start: tv1: success
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1    root          gfvm2::tv2    N/A           N/A       N/A             N/A                  
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 start
Starting geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED          
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1    root          gfvm2::tv2    N/A           Initializing...    N/A             N/A                  
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED          
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1    root          gfvm2::tv2    N/A           Initializing...    N/A             N/A                  
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1    root          gfvm2::tv2    N/A           Faulty    N/A             N/A                  
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1    root          gfvm2::tv2    N/A           Faulty    N/A             N/A                  
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# cd /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1
[root@gfvm2 brick1]# ls -a
.  ..  b1
[root@gfvm2 brick1]# cd b1/
[root@gfvm2 b1]# ls
file1    file16  file23  file30  file38  file45  file52  file6   file67  file74  file81  file89  file96
file10   file17  file24  file31  file39  file46  file53  file60  file68  file75  file82  file9   file97
file100  file18  file25  file32  file4   file47  file54  file61  file69  file76  file83  file90  file98
file11   file19  file26  file33  file40  file48  file55  file62  file7   file77  file84  file91  file99
file12   file2   file27  file34  file41  file49  file56  file63  file70  file78  file85  file92
file13   file20  file28  file35  file42  file5   file57  file64  file71  file79  file86  file93
file14   file21  file29  file36  file43  file50  file58  file65  file72  file8   file87  file94
file15   file22  file3   file37  file44  file51  file59  file66  file73  file80  file88  file95
[root@gfvm2 b1]# cd .glusterfs/
[root@gfvm2 .glusterfs]# cd changelogs/
[root@gfvm2 changelogs]# ls
CHANGELOG  CHANGELOG.1437655226  CHANGELOG.1437655271  CHANGELOG.1437656589  csnap  htime
[root@gfvm2 changelogs]# cat htime/HTIME.1437655211 
/rhs/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437655226/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655241/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655256/rhs/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437655271/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655286/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655301/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655316/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655331/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655346/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655361/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655376/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655391/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655406/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655421/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655436/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655451/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655466/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655481/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655496/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655511/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655526/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655541/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655556/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655571/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655586/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655602/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655617/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655632/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655647/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655662/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655677/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655692/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655707/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655722/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655737/rhs/
brick1/b1/.glusterfs/changelogs/changelog.1437655752/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655767/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655782/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655797/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655812/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655827/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655842/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655857/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655872/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655887/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655902/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655917/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655932/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655947/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655962/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655977/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437655992/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656007/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656022/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656037/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656052/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656068/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656083/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656098/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656113/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656128/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656143/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656158/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656173/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656188/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656203/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656218/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656233/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656248/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656263/rhs/brick
1/b1/.glusterfs/changelogs/changelog.1437656278/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656293/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437656306/run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437656589/run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1/.glusterfs/changelogs/changelog.1437656604/run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1/.glusterfs/changelogs/changelog.1437656619/run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1/.glusterfs/changelogs/changelog.1437656634[root@gfvm2 changelogs]# 
[root@gfvm2 changelogs]# 
[root@gfvm2 changelogs]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/18fff7e7ecae4d1299a935fa730293ee/brick1/b1    root          gfvm2::tv2    N/A           Faulty    N/A             N/A                  
[root@gfvm2 changelogs]# 

==============================================================================

Comment 14 Saravanakumar 2015-07-23 13:42:21 UTC
NOW WITH WORKAROUND (as mentioned in comment #12):

==============================================================================
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume create tv2 gfvm2:/rhs/brick2/b2
volume create: tv2: success: please start the volume to access data
[root@gfvm2 ~]# gluster volume create tv1 gfvm2:/rhs/brick1/b1 
volume create: tv1: success: please start the volume to access data
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume start tv1
volume start: tv1: success
[root@gfvm2 ~]# gluster volume start tv2
volume start: tv2: success
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# mount -t glusterfs gfvm2:/tv1 /mnt/master
[root@gfvm2 ~]# mount -t glusterfs gfvm2:/tv2 /mnt/slave/
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 create push-pem force
Creating geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 start 
Starting geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED          
--------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    N/A           Initializing...    N/A             N/A                  
[root@gfvm2 ~]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED          
--------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    gfvm2         Active    Changelog Crawl    N/A                  
[root@gfvm2 ~]# 
[root@gfvm2 ~]# 
[root@gfvm2 ~]# touch FILE_{1..100} 
[root@gfvm2 ~]# rm  FILE_{1..100} 
rm: remove regular empty file ‘FILE_1’? ^C
[root@gfvm2 ~]# rm  FILE_{1..100}  -f
[root@gfvm2 ~]# cd /mnt/master/
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# mount | grep mnt
gfvm2:/tv1 on /mnt/master type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfvm2:/tv2 on /mnt/slave type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# touch FILE_{1..100} ; sleep 15;
[root@gfvm2 master]# ls
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
FILE_16   FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
FILE_17   FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
[root@gfvm2 master]# ls /mnt/slave/
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
FILE_16   FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
FILE_17   FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume status
Status of volume: tv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick1/b1                  49152     0          Y       19889
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: tv2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfvm2:/rhs/brick2/b2                  49153     0          Y       19922
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume tv2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED                  
----------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    gfvm2         Active    Changelog Crawl    2015-07-23 18:47:55          
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 pause 
Pausing geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
-----------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    N/A           Paused    N/A             N/A                  
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot create snap_mv tv1 
snapshot create: success: Snap snap_mv_GMT-2015.07.23-13.18.57 created successfully
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot list
snap_mv_GMT-2015.07.23-13.18.57
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot create snap_sv tv2 
snapshot create: success: Snap snap_sv_GMT-2015.07.23-13.19.14 created successfully
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 status 
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
-----------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /rhs/brick1/b1    root          gfvm2::tv2    N/A           Paused    N/A             N/A                  
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 resume 
Resuming geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# cp /etc/hosts /etc/hostname  . 
[root@gfvm2 master]# sleep 10; ls ; ls /mnt/slave
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90  hostname
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91  hosts
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
FILE_16   FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
FILE_17   FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90  hostname
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91  hosts
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
FILE_16   FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
FILE_17   FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
[root@gfvm2 master]# gluster volume geo-replication tv1 gfvm2::tv2 stop 
Stopping geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume stop tv2 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: tv2: success
[root@gfvm2 master]# gluster volume stop tv1 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: tv1: success
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume info tv1
 
Volume Name: tv1
Type: Distribute
Volume ID: e10771af-ee41-43f6-9bc1-2fd4246c2bb0
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfvm2:/rhs/brick1/b1
Options Reconfigured:
features.barrier: disable
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
[root@gfvm2 master]# cp -a /rhs/brick1/b1/.glusterfs/
00/           23/           3d/           56/           8f/           b2/           df/
01/           24/           3e/           58/           94/           b3/           e1/
03/           25/           3f/           62/           96/           b4/           e2/
06/           26/           41/           63/           97/           b5/           e4/
07/           27/           46/           69/           9d/           bd/           ea/
0e/           28/           48/           6f/           9f/           c0/           f1/
10/           2a/           49/           71/           a7/           c1/           f2/
12/           2e/           4b/           75/           ac/           c7/           f4/
15/           33/           4c/           7e/           ae/           cb/           f6/
16/           34/           4d/           7f/           b0/           changelogs/   f9/
19/           36/           4f/           80/           b1/           d0/           health_check
1e/           37/           50/           85/           b1.db         d3/           indices/
21/           38/           53/           87/           b1.db-shm     da/           landfill/
22/           3c/           54/           8d/           b1.db-wal     dd/           
[root@gfvm2 master]# cp -a /rhs/brick1/b1/.glusterfs/changelogs/htime/ /root/htime_backup 
[root@gfvm2 master]# ls /root/htime_backup/
HTIME.1437657400
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster snapshot list
snap_mv_GMT-2015.07.23-13.18.57
snap_sv_GMT-2015.07.23-13.19.14
[root@gfvm2 master]# gluster snapshot restore   snap_sv_GMT-2015.07.23-13.19.14 
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: snap_sv_GMT-2015.07.23-13.19.14: Snap restored successfully
[root@gfvm2 master]# gluster snapshot restore   snap_mv_GMT-2015.07.23-13.18.57 
Restore operation will replace the original volume with the snapshotted volume. Do you still want to continue? (y/n) y
Snapshot restore: snap_mv_GMT-2015.07.23-13.18.57: Snap restored successfully
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# gluster volume info tv1
 
Volume Name: tv1
Type: Distribute
Volume ID: e10771af-ee41-43f6-9bc1-2fd4246c2bb0
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfvm2:/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
[root@gfvm2 master]# gluster volume info tv2
 
Volume Name: tv2
Type: Distribute
Volume ID: 383d3d1c-32da-4786-b4bf-a04baf0671b9
Status: Stopped
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfvm2:/run/gluster/snaps/d612fa07c003459e92c0fbc2890769a8/brick1/b2
Options Reconfigured:
performance.readdir-ahead: on
[root@gfvm2 master]# ls -a /run/gluster/snaps/d612fa07c003459e92c0fbc2890769a8/brick1/b2
.         FILE_16  FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
..        FILE_17  FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90  .glusterfs
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91  .trashcan
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
[root@gfvm2 master]# ls -a /run/gluster/snaps/d612fa07c003459e92c0fbc2890769a8/brick1/b2/.glusterfs/
.   06  15  22  27  34  3d  48  4f  58  71  85  96  ac  b2.db      b5  cb          dd  f1  health_check
..  07  16  23  28  36  3e  49  50  62  75  87  97  ae  b2.db-shm  bd  changelogs  df  f2  indices
00  0e  19  24  2a  37  3f  4b  53  63  7e  8d  9d  b0  b2.db-wal  c0  d0          e1  f4  landfill
01  10  1e  25  2e  38  41  4c  54  69  7f  8f  9f  b1  b3         c1  d3          e2  f6
03  12  21  26  33  3c  46  4d  56  6f  80  94  a7  b2  b4         c7  da          ea  f9
[root@gfvm2 master]# ls -a /run/gluster/snaps/d612fa07c003459e92c0fbc2890769a8/brick1/b2/.glusterfs/changelogs/
.  ..  csnap  htime
[root@gfvm2 master]# ls -a /run/gluster/snaps/d612fa07c003459e92c0fbc2890769a8/brick1/b2/.glusterfs/changelogs/htime/
.  ..
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# 
[root@gfvm2 master]# cd /run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1
[root@gfvm2 b1]# gluster volume status tv1
Volume tv1 is not started
[root@gfvm2 b1]# gluster volume status tv2
Volume tv2 is not started
[root@gfvm2 b1]# pwd
/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1
[root@gfvm2 b1]# 
[root@gfvm2 b1]# 
[root@gfvm2 b1]# cd .glusterfs/changelogs/
[root@gfvm2 changelogs]# pwd
/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs
[root@gfvm2 changelogs]# cd htime/
[root@gfvm2 htime]# pwd
/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/htime
[root@gfvm2 htime]# ls
HTIME.1437657400
[root@gfvm2 htime]# cat HTIME.1437657400 
/rhs/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437657415/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657431/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657446/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657461/rhs/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437657476/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657491/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657506/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657521/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657536/rhs/brick1/b1/.glusterfs/changelogs/changelog.1437657538[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# pwd
/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/htime
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# find /run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/htime/ -name 'HTIME.*' -print0  | \
> xargs -0 sed -ci 's|/rhs/brick1/b1/|/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/|g'
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# getfattr -d -m . HTIME.1437657400 
# file: HTIME.1437657400
trusted.glusterfs.htime="1437657538:10"

[root@gfvm2 htime]# getfattr -d -m . /root/HTIME.1437657400 
anaconda-ks.cfg         create_file.sh          .lesshst                .tcshrc
.bash_history           create_files.sh         .pki/                   test/
.bash_logout            create_volume.sh        rhs-system-init.sh      thin_pool_commands.txt
.bash_profile           .cshrc                  run_first.sh            trash/
.bashrc                 .gdb_history            scratch/                typescript
brick_content.sh        .gdbinit                scripts/                .viminfo
changelogparser.py      get_fop_count.patch     .smbcredentials         
.config/                htime_backup/           .ssh/                   
[root@gfvm2 htime]# getfattr -d -m . /root/htime_backup/HTIME.1437657400 
getfattr: Removing leading '/' from absolute path names
# file: root/htime_backup/HTIME.1437657400
trusted.glusterfs.htime="1437657629:16"

[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# pwd
/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/htime
[root@gfvm2 htime]# cat HTIME.1437657400 
/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437657415/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657431/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657446/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657461/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/CHANGELOG.1437657476/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657491/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657506/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657521/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657536/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1/.glusterfs/changelogs/changelog.1437657538[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# gluster volume start tv2 
volume start: tv2: success
[root@gfvm2 htime]# gluster volume start tv1 
volume start: tv1: success
[root@gfvm2 htime]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1    root          gfvm2::tv2    N/A           N/A       N/A             N/A                  
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# gluster volume geo-replication tv1 gfvm2::tv2 start 
Starting geo-replication session between tv1 & gfvm2::tv2 has been successful
[root@gfvm2 htime]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED          
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1    root          gfvm2::tv2    N/A           Initializing...    N/A             N/A                  
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS     LAST_SYNCED                  
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1    root          gfvm2::tv2    gfvm2         Active    History Crawl    2015-07-23 18:47:55          
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# ls /mnt/master/
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
FILE_16   FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
FILE_17   FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
[root@gfvm2 htime]# 
[root@gfvm2 htime]# 
[root@gfvm2 htime]# ls /mnt/slave/
FILE_1    FILE_18  FILE_27  FILE_36  FILE_45  FILE_54  FILE_63  FILE_72  FILE_81  FILE_90
FILE_10   FILE_19  FILE_28  FILE_37  FILE_46  FILE_55  FILE_64  FILE_73  FILE_82  FILE_91
FILE_100  FILE_2   FILE_29  FILE_38  FILE_47  FILE_56  FILE_65  FILE_74  FILE_83  FILE_92
FILE_11   FILE_20  FILE_3   FILE_39  FILE_48  FILE_57  FILE_66  FILE_75  FILE_84  FILE_93
FILE_12   FILE_21  FILE_30  FILE_4   FILE_49  FILE_58  FILE_67  FILE_76  FILE_85  FILE_94
FILE_13   FILE_22  FILE_31  FILE_40  FILE_5   FILE_59  FILE_68  FILE_77  FILE_86  FILE_95
FILE_14   FILE_23  FILE_32  FILE_41  FILE_50  FILE_6   FILE_69  FILE_78  FILE_87  FILE_96
FILE_15   FILE_24  FILE_33  FILE_42  FILE_51  FILE_60  FILE_7   FILE_79  FILE_88  FILE_97
FILE_16   FILE_25  FILE_34  FILE_43  FILE_52  FILE_61  FILE_70  FILE_8   FILE_89  FILE_98
FILE_17   FILE_26  FILE_35  FILE_44  FILE_53  FILE_62  FILE_71  FILE_80  FILE_9   FILE_99
[root@gfvm2 htime]# 
[root@gfvm2 htime]# gluster volume geo-replication tv1 gfvm2::tv2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK                                                     SLAVE USER    SLAVE         SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED                  
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gfvm2          tv1           /run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1    root          gfvm2::tv2    gfvm2         Active    Changelog Crawl    2015-07-23 18:47:55          
[root@gfvm2 htime]# 

================================================================================

Comment 15 Rahul Hinduja 2015-07-24 07:38:09 UTC
Why don't we go with the workaround mentioned in comment 7 instead of comment 11? Both workarounds need to be carried out manually by the user, but the manual effort with the newly suggested workaround is enormous.

Consider master and slave volumes each configured as 6x2 (the recommended layout). That means 24 bricks in total (12 master + 12 slave), and after snapshot creation there would be 24 more snapshot bricks. So eventually we would need to back up the htime directory from all 24 bricks across the cluster and, once the snapshot is restored, copy the backed-up htime to the 24 snapshotted bricks across the cluster.

The workaround in comment 7 is already tested and looks more user-friendly than the workaround in comment 11.

Comment 16 Aravinda VK 2015-07-24 07:54:57 UTC
(In reply to Rahul Hinduja from comment #15)
> Why don't we go with the workaround mentioned in comment 7 instead of
> comment 11? Both workarounds need to be carried out manually by the user,
> but the manual effort with the newly suggested workaround is enormous.
> 
> Consider master and slave volumes each configured as 6x2 (the recommended
> layout). That means 24 bricks in total (12 master + 12 slave), and after
> snapshot creation there would be 24 more snapshot bricks. So eventually we
> would need to back up the htime directory from all 24 bricks across the
> cluster and, once the snapshot is restored, copy the backed-up htime to
> the 24 snapshotted bricks across the cluster.
> 
> The workaround in comment 7 is already tested and looks more user-friendly
> than the workaround in comment 11.

Just a clarification: we need to run this step only for the new bricks of the master volume. Backing up the HTIME files is required because running the sed command alters them; if something fails, we need to copy them back.

The new workaround is safer for geo-rep since, unlike the earlier workaround, it does not fall back to Hybrid Crawl after snapshot restore. The earlier workaround also depends on Checkpoint, and Checkpoint has an issue with Hybrid Crawl.

If the master volume is 6x2, this script needs to be run for 12 bricks. Running it on the slave is required only in a cascaded setup.
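The per-brick rewrite described above can be sketched as a short shell script. This is a minimal, self-contained demo on a throwaway temp directory, not the real workaround: the OLD/NEW brick paths are taken from this session's transcript, and plain `sed -i` stands in for the `sed -ci` used in the documented workaround (the `-c` copy mode is a Red Hat sed extension that edits the file in place so its extended attributes survive; plain `-i` would lose them on a real brick).

```shell
#!/bin/sh
# Demo of the HTIME path-rewrite step on a throwaway copy.
# OLD/NEW are the brick paths from the transcript above; on a real
# brick, use 'sed -ci' (Red Hat sed's copy mode) instead of 'sed -i'
# so the trusted.glusterfs.htime extended attribute is preserved.
OLD="/rhs/brick1/b1"
NEW="/run/gluster/snaps/92f4f42e13cc47e1ad44d5fdd860a2b9/brick1/b1"

DEMO=$(mktemp -d)
mkdir -p "$DEMO/htime"
# Fake HTIME file: concatenated changelog paths under the old brick path.
printf '%s' "$OLD/.glusterfs/changelogs/CHANGELOG.1437657415$OLD/.glusterfs/changelogs/changelog.1437657431" \
    > "$DEMO/htime/HTIME.1437657400"

# Back up first: sed alters the files, so keep a pristine copy around
# (-a preserves extended attributes, as the workaround notes).
cp -a "$DEMO/htime" "$DEMO/htime_backup"

# Rewrite every occurrence of the old brick path in each HTIME.* file.
find "$DEMO/htime" -name 'HTIME.*' -print0 |
    xargs -0 sed -i "s|$OLD/|$NEW/|g"

RESULT=$(cat "$DEMO/htime/HTIME.1437657400")
echo "$RESULT"
rm -rf "$DEMO"
```

Running it prints the HTIME content with both changelog entries relocated under the snapshot brick path, mirroring the `cat HTIME.1437657400` output shown in comment 14.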

Comment 18 monti lawrence 2015-07-24 17:51:52 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 19 Kotresh HR 2015-07-27 04:48:37 UTC
Doc Text looks fine, for the reasons mentioned above in comment 16.

Comment 20 Aravinda VK 2015-07-27 06:00:06 UTC
(In reply to monti lawrence from comment #18)
> Doc text is edited. Please sign off to be included in Known Issues.

Small change required in workaround steps.

1. Remove step 3, since all these steps are carried out after snapshot restore.
2. The path in the example in step 2 was wrong; it should be the new brick path:
cp -a /opt/volume_test/tv_1/b1.1/.glusterf/changeslogs/htime  /opt/backup_htime/

Comment 21 Aravinda VK 2015-07-27 06:44:53 UTC
Updated the doc text as per comment 20. Also updated the brick paths in all examples.

Comment 22 Kotresh HR 2015-07-27 07:01:21 UTC
Updated DocText. Please check.

Comment 23 Rahul Hinduja 2015-07-27 08:45:18 UTC
Typo in DocText for step 2: it should be .glusterfs instead of .glusterf.

Current:

For example:
	cp -a /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterf/changeslogs/htime  /opt/backup_htime/brick0_b0

Should be:

For example:
	cp -a /var/run/gluster/snaps/a4e2c4647cf642f68d0f8259b43494c0/brick0/b0/.glusterfs/changeslogs/htime  /opt/backup_htime/brick0_b0


Corrected this in the DocText as well.

Comment 24 Anjana Suparna Sriram 2015-07-28 03:11:19 UTC
Included the edited text.

Comment 26 Pan Ousley 2018-04-20 14:28:14 UTC
*** Bug 1569612 has been marked as a duplicate of this bug. ***

Comment 28 Atin Mukherjee 2018-11-01 17:53:37 UTC
What would it take to get this addressed?

Comment 30 Atin Mukherjee 2019-06-07 05:23:14 UTC
*** Bug 1713233 has been marked as a duplicate of this bug. ***

