Bug 1223695 - [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
Summary: [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Aravinda VK
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On: 1224098 1224100
Blocks: 1202842 1223636
 
Reported: 2015-05-21 09:15 UTC by Rahul Hinduja
Modified: 2015-07-29 04:44 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1224098 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:44:21 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Rahul Hinduja 2015-05-21 09:15:02 UTC
Description of problem:
=======================

The purpose of the DATA counter in "status detail" is to report how many data operations are still queued for sync. Once the sync is successful, the counter should reset to 0, but it does not.

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status detail
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME        CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME   
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:03:50    0        377     0       0           2015-05-21 14:32:54    No                      N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:32:20    0        372     0       0           2015-05-21 14:32:54    No                      N/A                          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
[root@georep1 scripts]# 


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0-2.el6rhs.x86_64


How reproducible:
=================
2/2


Steps to Reproduce:
===================
1. Create and start the master volume
2. Create and start the slave volume
3. Create and start the meta volume
4. Create and start a geo-rep session between the master and slave volumes
5. Mount the master and slave volumes
6. Create files/directories on the master volume.
7. Execute the status detail command from a master node; the ENTRY and DATA counters increase while changes are queued.
8. Let the sync complete.
9. Calculate checksums of the master and slave volumes to confirm that the sync has completed.
10. Once the sync completes, check the status detail again (a shell sketch of these steps follows).
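
A minimal shell sketch of these steps, reusing the volume name "master" and the slave 10.70.46.154::slave from the output above; the brick layout, mount point, data set, and the omitted meta-volume setup are placeholders, not the exact configuration used in this report.

# Create and start the master volume (brick layout is a placeholder)
gluster volume create master replica 2 georep1:/rhs/brick1/b1 georep2:/rhs/brick1/b1
gluster volume start master
# ... create and start the slave and meta volumes similarly ...

# Create and start the geo-rep session between master and slave
gluster volume geo-replication master 10.70.46.154::slave create push-pem
gluster volume geo-replication master 10.70.46.154::slave start

# Mount the master volume and generate some data
mount -t glusterfs georep1:/master /mnt/master
cp -r /etc /mnt/master/

# Watch the counters; once the sync completes, ENTRY and DATA should read 0
gluster volume geo-replication master 10.70.46.154::slave status detail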

Actual results:
===============

The ENTRY counter resets to 0, but the DATA counter still shows values such as 377.


Expected results:
=================

All the counters should reset to 0, indicating that nothing is pending to sync.


Additional info:
=================

Arequal info for master and slave:


[root@wingo master]# /root/scripts/arequal-checksum -p /mnt/master

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root@wingo master]# 


[root@wingo slave]# /root/scripts/arequal-checksum -p /mnt/slave

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root@wingo slave]#
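
The two reports above are identical, confirming the sync completed. A quick way to compare them without reading the output by hand (same script and mount points as above; this is only a convenience, not part of the original verification):

# Identical arequal output on the master and slave mounts means the data synced
diff <(/root/scripts/arequal-checksum -p /mnt/master) \
     <(/root/scripts/arequal-checksum -p /mnt/slave) \
  && echo "master and slave are in sync"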

Comment 3 Aravinda VK 2015-05-26 08:51:55 UTC
Upstream patch sent:
mainline:    http://review.gluster.org/#/c/10911/
release-3.7: http://review.gluster.org/#/c/10912/

Comment 6 Rahul Hinduja 2015-07-06 18:09:25 UTC
Verified with build: glusterfs-3.7.1-7.el6rhs.x86_64

Upon successful sync to the slave, the DATA counter resets to 0.

Moving the bug to verified state.

Comment 8 errata-xmlrpc 2015-07-29 04:44:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

