Bug 1223695 - [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.1
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Aravinda VK
QA Contact: Rahul Hinduja
Depends On: 1224098 1224100
Blocks: 1202842 1223636
 
Reported: 2015-05-21 05:15 EDT by Rahul Hinduja
Modified: 2015-07-29 00:44 EDT (History)
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1224098
Environment:
Last Closed: 2015-07-29 00:44:21 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID:   Red Hat Product Errata RHSA-2015:1495
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 04:26:26 EDT

Description Rahul Hinduja 2015-05-21 05:15:02 EDT
Description of problem:
=======================

The purpose of the DATA counter in "status detail" is to report the pending queue of data operations still to be synced. Once the sync completes successfully, the counter should reset to 0, but this is not happening.

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status detail
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME        CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME   
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:03:50    0        377     0       0           2015-05-21 14:32:54    No                      N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:32:20    0        372     0       0           2015-05-21 14:32:54    No                      N/A                          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A                          
[root@georep1 scripts]# 
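
A quick way to watch whether the DATA counter drains once the changelog crawl catches up (a minimal sketch, assuming the same volume and slave names as above; only the Active workers carry the counters):

# Re-run status detail every 10 seconds and show only the Active workers,
# so the DATA column can be watched until it (ideally) drops back to 0
watch -n 10 'gluster volume geo-replication master 10.70.46.154::slave status detail | grep Active'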


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0-2.el6rhs.x86_64


How reproducible:
=================
2/2


Steps to Reproduce:
===================
1. Create and start the master volume.
2. Create and start the slave volume.
3. Create and start the meta volume.
4. Create and start a geo-rep session between the master and slave.
5. Mount the master and slave volumes.
6. Create files/directories on the master volume.
7. Execute the status detail command from a master node; the ENTRY and DATA counters increment while the sync is in progress.
8. Let the sync complete.
9. Calculate the checksum of the master and slave volumes to confirm that the sync is complete.
10. Once the sync is complete, check the status detail again. (A rough command sketch for these steps follows.)
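
A rough command sequence for the steps above (a sketch only: hostnames, brick paths, and mount points are illustrative and not taken from this report; the geo-rep commands follow the standard gluster CLI):

# 1-3. Create and start the master, slave, and meta volumes (run the slave
#      commands on the slave cluster); bricks and hosts here are placeholders
gluster volume create master replica 2 georep1:/rhs/brick1/b1 georep2:/rhs/brick1/b1
gluster volume start master
gluster volume create slave replica 2 slave1:/rhs/brick1/b1 slave2:/rhs/brick1/b1
gluster volume start slave
gluster volume create gluster_shared_storage replica 3 georep1:/rhs/meta/b1 georep2:/rhs/meta/b1 georep3:/rhs/meta/b1
gluster volume start gluster_shared_storage

# 4. Create and start the geo-rep session, pointing it at the meta volume
gluster volume geo-replication master 10.70.46.154::slave create push-pem
gluster volume geo-replication master 10.70.46.154::slave config use_meta_volume true
gluster volume geo-replication master 10.70.46.154::slave start

# 5-6. Mount the master volume and generate some data
mount -t glusterfs georep1:/master /mnt/master
for i in $(seq 1 100); do dd if=/dev/urandom of=/mnt/master/file$i bs=1M count=1; done

# 7-10. Check the counters while syncing and again after the sync settles
gluster volume geo-replication master 10.70.46.154::slave status detail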

Actual results:
===============

The ENTRY counter is reset to 0, but the DATA counter still holds values such as 377.


Expected results:
=================

All the counters should reset to 0, indicating that nothing is pending to sync.


Additional info:
=================

Arequal info for the master and slave volumes:


[root@wingo master]# /root/scripts/arequal-checksum -p /mnt/master

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root@wingo master]# 


[root@wingo slave]# /root/scripts/arequal-checksum -p /mnt/slave

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b
[root@wingo slave]#
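
Rather than eyeballing each field, the two arequal runs can simply be diffed (a minimal sketch assuming the same arequal-checksum helper and mount points shown above):

# Capture arequal output for both mounts and compare; identical output,
# in particular the final Total checksum, confirms the data has synced
/root/scripts/arequal-checksum -p /mnt/master > /tmp/arequal-master.txt
/root/scripts/arequal-checksum -p /mnt/slave  > /tmp/arequal-slave.txt
diff /tmp/arequal-master.txt /tmp/arequal-slave.txt && echo "master and slave match"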
Comment 3 Aravinda VK 2015-05-26 04:51:55 EDT
Upstream patches sent:
mainline:    http://review.gluster.org/#/c/10911/
release-3.7: http://review.gluster.org/#/c/10912/
Comment 6 Rahul Hinduja 2015-07-06 14:09:25 EDT
Verified with build: glusterfs-3.7.1-7.el6rhs.x86_64

Upon successful sync to the slave, the DATA counter resets to 0.

Moving the bug to verified state.
Comment 8 errata-xmlrpc 2015-07-29 00:44:21 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
