Bug 1224100 - [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
Summary: [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Aravinda VK
QA Contact:
URL:
Whiteboard:
Depends On: 1224098
Blocks: glusterfs-3.7.1 1223636 1223695
 
Reported: 2015-05-22 08:28 UTC by Aravinda VK
Modified: 2015-06-02 06:20 UTC (History)
8 users

Fixed In Version: glusterfs-3.7.1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1224098
Environment:
Last Closed: 2015-06-02 06:20:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-05-22 08:28:55 UTC
+++ This bug was initially created as a clone of Bug #1224098 +++

+++ This bug was initially created as a clone of Bug #1223695 +++

Description of problem:
=======================

The purpose of the DATA counter in "status detail" is to report the number of entries pending in the sync queue. Once the sync completes successfully, the counter should reset to 0, which is not happening.

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status detail
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME        CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME   
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:03:50    0        377     0       0           2015-05-21 14:32:54    No                      N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:32:20    0        372     0       0           2015-05-21 14:32:54    No                      N/A                          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
[root@georep1 scripts]# 


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0


How reproducible:
=================
2/2


Steps to Reproduce:
===================
1. Create and Start master volume
2. Create and Start slave volume
3. Create and Start meta volume
4. Create and Start geo-rep between master and slave
5. Mount the master and slave volume
6. Create files/directories on the master volume.
7. Execute the status detail command from the master node. You will observe the ENTRY and DATA counters increase.
8. Let the sync complete.
9. Calculate checksum of master and slave volume to confirm that the sync is completed.
10. Once sync complete, check the status detail again.
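The steps above can be sketched as a shell session. Hostnames, brick paths, and the 10.70.46.154 slave address follow the status output in the description; the volume-create layout and the file-creation workload are assumptions for illustration (geo-rep in 3.7 uses the shared-storage volume as the meta volume):

```shell
# Step 1: create and start the master volume (brick layout assumed)
gluster volume create master replica 2 \
    georep1:/rhs/brick1/b1 georep2:/rhs/brick1/b1 \
    georep1:/rhs/brick2/b2 georep2:/rhs/brick2/b2 force
gluster volume start master

# Step 2: on the slave cluster (10.70.46.154), create and start the
# slave volume the same way, then return to the master node.

# Step 3: enable the shared meta volume
gluster volume set all cluster.enable-shared-storage enable

# Step 4: create and start the geo-rep session between master and slave
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.46.154::slave create push-pem
gluster volume geo-replication master 10.70.46.154::slave config use_meta_volume true
gluster volume geo-replication master 10.70.46.154::slave start

# Steps 5-6: mount the master volume and create files/directories
mount -t glusterfs georep1:/master /mnt/master
mkdir -p /mnt/master/dir{1..10}
for i in $(seq 1 100); do
    dd if=/dev/urandom of=/mnt/master/dir1/f$i bs=1M count=1
done

# Steps 7-10: watch the counters; while syncing, ENTRY and DATA grow,
# and after the sync completes both should return to 0
gluster volume geo-replication master 10.70.46.154::slave status detail
```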

Actual results:
===============

The ENTRY counter is reset to 0, but the DATA counter still holds values like 377.


Expected results:
=================

All counters should reset to 0, indicating that nothing is pending sync.


Additional info:
=================

Arequal info for master and slave:


[root@wingo master]# /root/scripts/arequal-checksum -p /mnt/master

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root@wingo master]# 


[root@wingo slave]# /root/scripts/arequal-checksum -p /mnt/slave

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root@wingo slave]#

Comment 1 Anand Avati 2015-05-26 08:50:52 UTC
REVIEW: http://review.gluster.org/10912 (geo-rep: Fix Data counter issue in status) posted (#1) for review on release-3.7 by Aravinda VK (avishwan)

Comment 2 Anand Avati 2015-05-29 07:37:37 UTC
REVIEW: http://review.gluster.org/10912 (geo-rep: Fix Data counter issue in status) posted (#2) for review on release-3.7 by Aravinda VK (avishwan)

