Bug 1025291 - Dist-geo-rep: geo-rep status detail shows wrong info of files synced for passive node when active node goes down
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-10-31 08:01 EDT by Vijaykumar Koppad
Modified: 2015-08-06 10:39 EDT (History)
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-08-06 10:39:46 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments: None
Description Vijaykumar Koppad 2013-10-31 08:01:58 EDT
Description of problem: Dist-geo-rep: geo-rep status detail shows wrong info of files synced for the passive node when the active node goes down.

Status detail before bringing the active node down:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
                                             MASTER: master  SLAVE: ssh://10.70.43.159::slave

NODE                          HEALTH    UPTIME      FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    TOTAL FILES SKIPPED
-------------------------------------------------------------------------------------------------------------------------------------------
shaktiman.blr.redhat.com      Stable    00:37:26    592            0                0Bytes           0                  0
targarean.blr.redhat.com      Stable    00:37:22    608            0                0Bytes           0                  0
snow.blr.redhat.com           Stable    00:37:22    0              0                0Bytes           0                  0
riverrun.blr.redhat.com       Stable    00:37:22    0              0                0Bytes           0                  0

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

No files were created after this; the active node, targarean, was simply brought down.

Status detail after bringing the node down:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
                                             MASTER: master  SLAVE: ssh://10.70.43.159::slave

NODE                          HEALTH    UPTIME      FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    TOTAL FILES SKIPPED
-------------------------------------------------------------------------------------------------------------------------------------------
shaktiman.blr.redhat.com      Stable    00:41:17    592            0                0Bytes           0                  0
snow.blr.redhat.com           Stable    00:41:13    1216           0                0Bytes           0                  0
riverrun.blr.redhat.com       Stable    00:41:13    0              0                0Bytes           0                  0

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
snow and targarean are a replica pair.
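To see the inconsistency concretely, the two status tables above can be compared per node. This is a minimal illustrative sketch (not Geo-rep's actual parser) that splits each row on whitespace and reads the FILES SYNCD column; the table text is copied from the output above:

```python
# Status detail rows copied from the output above (header omitted).
BEFORE = """\
shaktiman.blr.redhat.com  Stable  00:37:26  592   0  0Bytes  0  0
targarean.blr.redhat.com  Stable  00:37:22  608   0  0Bytes  0  0
snow.blr.redhat.com       Stable  00:37:22  0     0  0Bytes  0  0
riverrun.blr.redhat.com   Stable  00:37:22  0     0  0Bytes  0  0
"""

AFTER = """\
shaktiman.blr.redhat.com  Stable  00:41:17  592   0  0Bytes  0  0
snow.blr.redhat.com       Stable  00:41:13  1216  0  0Bytes  0  0
riverrun.blr.redhat.com   Stable  00:41:13  0     0  0Bytes  0  0
"""

def files_syncd(table: str) -> dict:
    """Map node name -> FILES SYNCD (the 4th whitespace-separated column)."""
    out = {}
    for line in table.strip().splitlines():
        cols = line.split()
        out[cols[0]] = int(cols[3])
    return out

before, after = files_syncd(BEFORE), files_syncd(AFTER)
# No files were created between the two snapshots, yet snow's counter
# jumped from 0 to 1216 after it took over from targarean.
print(after["snow.blr.redhat.com"] - before["snow.blr.redhat.com"])  # 1216
```

Note that 1216 is roughly the combined count previously reported by the two active nodes (592 + 608), suggesting snow re-counted already-synced files when it became active.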

Volume info:

Volume Name: master
Type: Distributed-Replicate
Volume ID: 8f42dabe-1f56-41c1-920c-0de95d625809
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.58:/bricks/brick1
Brick2: 10.70.43.63:/bricks/brick2
Brick3: 10.70.43.108:/bricks/brick3
Brick4: 10.70.43.158:/bricks/brick4
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
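For reference, gluster builds the replica sets of a distributed-replicate volume from consecutive bricks in listing order, so the "2 x 2 = 4" layout above pairs Brick1 with Brick2 and Brick3 with Brick4. A small sketch of that grouping, using the brick list from the volume info:

```python
# Bricks of a distributed-replicate volume form replica sets in listing
# order: every `replica_count` consecutive bricks replicate each other.
bricks = [
    "10.70.43.58:/bricks/brick1",
    "10.70.43.63:/bricks/brick2",
    "10.70.43.108:/bricks/brick3",
    "10.70.43.158:/bricks/brick4",
]
replica_count = 2  # from "Number of Bricks: 2 x 2 = 4"

replica_sets = [bricks[i:i + replica_count]
                for i in range(0, len(bricks), replica_count)]
print(replica_sets)
```

Within each set, geo-rep picks one brick's worker as active and leaves the other passive, which is why bringing down an active node promotes its replica partner.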


Version-Release number of selected component (if applicable): glusterfs-3.4.0.37rhs-1.el6rhs.x86_64


How reproducible: Didn't try to reproduce 


Steps to Reproduce:
1. Create and start a geo-rep session between master and slave.
2. Create and sync some files from master to slave.
3. Check the status detail.
4. Bring down one of the active replica nodes.
5. Check the status detail again.

Actual results: Wrong FILES SYNCD count in status detail for the passive node.


Expected results: It should report the correct number of files synced.


Additional info:
Comment 3 Aravinda VK 2015-08-06 10:39:46 EDT
The files synced column is removed from the status output in RHGS 3.1. If we introduce a persistent store while working on RFE 988857, we can show the number of files synced. But with the existing limitation, this column is removed since it would mislead the user.

The current status shows ENTRY, DATA and METADATA as three separate columns. These values get reset whenever the geo-rep worker is restarted.
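The reset behaviour described above is the root of the misleading numbers. A toy sketch (hypothetical `Worker` class, not geo-rep's implementation) of a per-worker, in-memory counter:

```python
class Worker:
    """Hypothetical geo-rep worker keeping its sync counter in memory."""
    def __init__(self):
        self.files_syncd = 0          # lost whenever the worker restarts

    def sync(self, n):
        self.files_syncd += n

# The active worker on targarean syncs 608 files, then the node goes down.
active = Worker()
active.sync(608)

# The passive replica (snow) takes over with a fresh worker: its counter
# starts at 0, and the crawl re-counts files that were already synced.
passive = Worker()
passive.sync(1216)   # re-counted data set, not newly created files

# Without a persistent store there is no stable cumulative total to
# report, which is why the column was dropped.
print(passive.files_syncd)
```

A persistent store (as proposed in RFE 988857) would let the counter survive worker restarts and failovers instead of starting from zero each time.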

Closing this bug for the same reason. Please reopen it if the issue is seen again.
