Description of problem:
Dist-geo-rep: geo-rep status detail shows wrong info of files synced for the passive node when the active node goes down.

status detail before bringing the active node down:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
MASTER: master    SLAVE: ssh://10.70.43.159::slave

NODE                        HEALTH    UPTIME      FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    TOTAL FILES SKIPPED
-------------------------------------------------------------------------------------------------------------------------------------------
shaktiman.blr.redhat.com    Stable    00:37:26    592            0                0Bytes           0                  0
targarean.blr.redhat.com    Stable    00:37:22    608            0                0Bytes           0                  0
snow.blr.redhat.com         Stable    00:37:22    0              0                0Bytes           0                  0
riverrun.blr.redhat.com     Stable    00:37:22    0              0                0Bytes           0                  0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

No files were created after this; the active node targarean was simply brought down.

status detail after the node went down:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
MASTER: master    SLAVE: ssh://10.70.43.159::slave

NODE                        HEALTH    UPTIME      FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    TOTAL FILES SKIPPED
-------------------------------------------------------------------------------------------------------------------------------------------
shaktiman.blr.redhat.com    Stable    00:41:17    592            0                0Bytes           0                  0
snow.blr.redhat.com         Stable    00:41:13    1216           0                0Bytes           0                  0
riverrun.blr.redhat.com     Stable    00:41:13    0              0                0Bytes           0                  0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

snow and targarean are replica pairs.
Volume info:

Volume Name: master
Type: Distributed-Replicate
Volume ID: 8f42dabe-1f56-41c1-920c-0de95d625809
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.58:/bricks/brick1
Brick2: 10.70.43.63:/bricks/brick2
Brick3: 10.70.43.108:/bricks/brick3
Brick4: 10.70.43.158:/bricks/brick4
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on

Version-Release number of selected component (if applicable): glusterfs-3.4.0.37rhs-1.el6rhs.x86_64

How reproducible: Didn't try to reproduce

Steps to Reproduce:
1. Create and start a geo-rep relationship between the master and slave volumes.
2. Create and sync some files from master to slave.
3. Check the status detail.
4. Bring down one node of an active replica pair.
5. Check the status detail again.

Actual results: Wrong "files synced" count in status detail.

Expected results: It should report the correct number of files synced.

Additional info:
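The reproduction steps above can be sketched with the standard GlusterFS geo-replication CLI. This is a minimal sketch, assuming the volume names (master, slave) and slave host (10.70.43.159) from this report, passwordless SSH to the slave, and the geo-rep command syntax of the glusterfs 3.4.x line; exact options may differ on other releases.

```shell
# Run on one of the master-volume nodes.

# 1. Create and start the geo-rep session (push-pem distributes SSH keys).
gluster volume geo-replication master 10.70.43.159::slave create push-pem
gluster volume geo-replication master 10.70.43.159::slave start

# 2. Create some files on a master mount so they sync to the slave,
#    e.g. via a fuse mount of the master volume (mount point is an assumption):
#    mount -t glusterfs 10.70.43.58:/master /mnt/master
#    for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/master/f$i bs=1k count=1; done

# 3. Check the per-node sync counters before the failure.
gluster volume geo-replication master 10.70.43.159::slave status detail

# 4. Bring down one node of an active replica pair (here, targarean),
#    e.g. by stopping glusterd / powering off that node.

# 5. Re-check and compare the FILES SYNCD column for the passive peer
#    that took over (snow in this report).
gluster volume geo-replication master 10.70.43.159::slave status detail
```

The bug is visible in step 5: the passive peer that becomes active reports a FILES SYNCD value (1216) that double-counts files already synced before the failover, even though no new files were created.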
The "files synced" column is removed from the status output in RHGS 3.1. If we introduce a persistent store while working on RFE 988857, we can show the number of files synced. With the existing limitation, this column is removed since it will mislead the user. The current status shows ENTRY, DATA, and METADATA as three separate columns; these values get reset whenever the geo-rep worker is restarted. Closing this bug for the same reason. Please reopen this bug if the issue is found again.