Bug 1025938 - Dist-geo-rep : geo-rep status detail has more number of skipped files than synced when lots of creates and deletes are performed.
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64 Linux
Priority: medium
Severity: high
Assigned To: Bug Updates Notification Mailing List
storage-qa-internal@redhat.com
status
: ZStream
Depends On:
Blocks: 1285202
 
Reported: 2013-11-02 03:05 EDT by Vijaykumar Koppad
Modified: 2015-11-25 03:52 EST (History)
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1285202 (view as bug list)
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vijaykumar Koppad 2013-11-02 03:05:30 EDT
Description of problem: geo-rep status detail shows more skipped files than synced files when lots of creates and deletes are performed on the master.

status detail output
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
                                             MASTER: master  SLAVE: ssh://10.70.43.76::slave
 
NODE                         HEALTH    UPTIME      FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    TOTAL FILES SKIPPED   
------------------------------------------------------------------------------------------------------------------------------------------
redcell.blr.redhat.com       Stable    18:42:15    63702          0                0Bytes           0                  84770                 
redcloak.blr.redhat.com      Stable    18:42:11    0              0                0Bytes           0                  0                     
redwood.blr.redhat.com       Stable    18:42:11    0              0                0Bytes           0                  0                     
redlake.blr.redhat.com       Stable    18:42:11    69248          0                0Bytes           0                  78680 

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

If you check the server redcell, it has 63702 files synced but 84770 files skipped, which is a huge number. It shouldn't skip that many files.

The geo-rep logs also contain a lot of rsync failures with exit code 23:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[2013-11-02 12:24:18.174379] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/58a5f9ca-aa87-4bf0-9492-a11f8daaba58 [errcode: 23]
[2013-11-02 12:24:18.176805] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/2289e28e-bad3-456c-beae-81e9ffd3939f [errcode: 23]
[2013-11-02 12:24:18.179011] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/bfa90557-661a-41c1-a2e6-96d7440fb145 [errcode: 23]
[2013-11-02 12:24:18.181179] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/5db6f196-88e4-401a-9591-4d701b79c71d [errcode: 23]
[2013-11-02 12:24:18.183270] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/450c56ee-f939-4e1c-a1a0-053da05903dd [errcode: 23]
[2013-11-02 12:24:18.185342] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/fee5f433-c54f-4f3a-b9bb-ec99ae0669a2 [errcode: 23]
[2013-11-02 12:24:18.187408] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/16cb000c-5dca-4bf1-b840-ec115195df05 [errcode: 23]
[2013-11-02 12:24:18.190316] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/ad27aaa2-be5a-427f-9220-666afcf2a123 [errcode: 23]
[2013-11-02 12:24:18.192959] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/8a010d8a-80ce-489d-8447-685ca8f02a68 [errcode: 23]
[2013-11-02 12:24:18.195560] W [master(/bricks/brick3):635:regjob] _GMaster: Rsync: .gfid/c12b2e9b-e2fa-487a-9eeb-5c0e4441edd8 [errcode: 23]

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Version-Release number of selected component (if applicable):glusterfs-3.4.0.38rhs-1.el6rhs.x86_64


How reproducible: Didn't try to reproduce again. 


Steps to Reproduce:
1. Create and start a geo-rep session between master and slave.
2. Run lots of create and delete operations on the master mount point in a loop.
3. Check the status detail output now and then, along with the geo-rep log files.
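The churn in step 2 can be sketched as a small script. The gluster commands in the comments use the volume and slave names from the status output above; MNT is a stand-in for the master volume's mount point and defaults to a temporary directory so the loop itself runs anywhere:

```shell
#!/bin/sh
# Step 1 (one-time setup, run on a master node):
#   gluster volume geo-replication master 10.70.43.76::slave create push-pem
#   gluster volume geo-replication master 10.70.43.76::slave start
#
# Step 2: bursts of creates immediately followed by deletes on the
# master mount point.
MNT=${MNT:-$(mktemp -d)}

for i in $(seq 1 5); do
    mkdir -p "$MNT/burst$i"
    for j in $(seq 1 100); do
        echo "data $i-$j" > "$MNT/burst$i/file$j"
    done
    # Delete while the changelog may still list the creates, so the
    # sync jobs race against files that no longer exist.
    rm -rf "$MNT/burst$i"
done

# Step 3: watch the counters and logs while the loop runs:
#   gluster volume geo-replication master 10.70.43.76::slave status detail
echo "churn complete in $MNT"
```

Scaling up the burst count and file count per burst should make the race more likely to show up in the skipped-files counter.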

Actual results: geo-rep status detail shows more skipped files than synced files.


Expected results: Ideally it shouldn't skip any files; even if it does for some reason, it shouldn't be this many.


Additional info:
Comment 3 Aravinda VK 2015-11-25 03:51:45 EST
Closing this bug since RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again.
Comment 4 Aravinda VK 2015-11-25 03:52:26 EST
Closing this bug since RHGS 2.1 release reached EOL. Required bugs are cloned to RHGS 3.1. Please re-open this issue if found again.
