Description of problem:
=======================
The purpose of the DATA counter in "status detail" is to provide information about the pending queue to sync. Once the sync is successful, the counter should reset to 0, which is not happening.

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status detail

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME        CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:03:50    0        377     0       0           2015-05-21 14:32:54    No                      N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Active     Changelog Crawl    2015-05-21 14:32:20    0        372     0       0           2015-05-21 14:32:54    No                      N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                    N/A                     N/A
[root@georep1 scripts]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.0-2.el6rhs.x86_64

How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create and start the master volume.
2. Create and start the slave volume.
3. Create and start the meta volume.
4. Create and start geo-rep between master and slave.
5. Mount the master and slave volumes.
6. Create files/directories on the master volume.
7. Execute the status detail command from the master node; you will observe the ENTRY and DATA counters increment.
8. Let the sync complete.
9. Calculate checksums of the master and slave volumes to confirm that the sync is complete.
10. Once the sync is complete, check the status detail again.

(A command sketch of these steps is given at the end of this comment.)

Actual results:
===============
The ENTRY counter is reset to 0, but the DATA counter still holds values like 377.

Expected results:
=================
All the counters should reset to 0, indicating that nothing is pending to sync.

Additional info:
================
Arequal info for master and slave:

[root@wingo master]# /root/scripts/arequal-checksum -p /mnt/master

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b

[root@wingo master]#

[root@wingo slave]# /root/scripts/arequal-checksum -p /mnt/slave

Entry counts
Regular files   : 519
Directories     : 140
Symbolic links  : 114
Other           : 0
Total           : 773

Metadata checksums
Regular files   : 47e250
Directories     : 3e9
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 4f4af7ac217c3da67e7270a056d2fba
Directories     : 356e0d5141064d2c
Symbolic links  : 7313722a0c5b0a7b
Other           : 0
Total           : ed0afdd694c554b

[root@wingo slave]#
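For reference, a minimal shell sketch of the reproduction steps above. The hostnames (georep1/2/3, 10.70.46.154), volume names (master, slave) and the arequal script path are taken from this report; the volume layouts, meta-volume bricks, mount points and test data are illustrative assumptions, as is using a replica-3 shared volume with use_meta_volume for step 3:

# 1-2. Create and start the master and slave volumes (layouts illustrative)
gluster volume create master replica 3 georep1:/rhs/brick1/b1 georep2:/rhs/brick1/b1 georep3:/rhs/brick1/b1
gluster volume start master
# on the slave cluster:
gluster volume create slave 10.70.46.154:/rhs/brick1/b1
gluster volume start slave

# 3. Create and start a meta volume for geo-rep (assumed setup)
gluster volume create gluster_shared_storage replica 3 georep1:/rhs/meta/b1 georep2:/rhs/meta/b1 georep3:/rhs/meta/b1
gluster volume start gluster_shared_storage

# 4. Create and start the geo-rep session
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.46.154::slave create push-pem
gluster volume geo-replication master 10.70.46.154::slave config use_meta_volume true
gluster volume geo-replication master 10.70.46.154::slave start

# 5-6. Mount the master and slave volumes and create some files/directories
mount -t glusterfs georep1:/master /mnt/master
mount -t glusterfs 10.70.46.154:/slave /mnt/slave
mkdir -p /mnt/master/dir{1..10}
for i in $(seq 1 50); do dd if=/dev/urandom of=/mnt/master/dir1/f$i bs=128k count=8; done

# 7, 10. Watch the ENTRY/DATA counters before and after the sync drains
gluster volume geo-replication master 10.70.46.154::slave status detail

# 9. Compare master and slave checksums (script path from this report)
/root/scripts/arequal-checksum -p /mnt/master
/root/scripts/arequal-checksum -p /mnt/slave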
Upstream patch sent:
mainline: http://review.gluster.org/#/c/10911/
release-3.7: http://review.gluster.org/#/c/10912/
Patches:
master: http://review.gluster.org/#/c/10911/
release-3.7: http://review.gluster.org/10912
downstream: https://code.engineering.redhat.com/gerrit/#/c/49673/
Verified with build: glusterfs-3.7.1-7.el6rhs.x86_64

Upon successful sync to the slave, the DATA counter resets to 0. Moving the bug to verified state.
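A quick way to spot-check this after the sync has drained (the awk field numbers are an assumption based on the status-detail layout above, where "Changelog Crawl" and the LAST_SYNCED timestamp each split into two whitespace-separated fields, putting ENTRY at $12 and DATA at $13 on Active rows):

gluster volume geo-replication master 10.70.46.154::slave status detail \
    | awk '/Active/ {print $1, $3, "ENTRY="$12, "DATA="$13}'

With the fix, both counters should print 0 on every Active brick once the arequal checksums of master and slave match.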
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html