Bug 1083963 - Dist-geo-rep: after renames on master, there are more files on the slave than on the master.
Summary: Dist-geo-rep: after renames on master, there are more files on the slave than on the master.
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: pre-release
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-04-03 10:16 UTC by Vijaykumar Koppad
Modified: 2015-10-22 15:40 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-10-22 15:40:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Vijaykumar Koppad 2014-04-03 10:16:27 UTC
Description of problem:
   After renames on the master mount point, there are more files on the slave than on the master.

[root@localhost master]# ls level00 -l
total 26
dr-xr-xr-x. 3 47639 51862 8192 Apr  3 10:10 533d6d18%%43LKRD1XS2
-rwx--x---. 1 26910 38383 1614 Apr  3 10:10 533d6d18%%6MFU3W1J7G
-----w----. 1 27374 52807 1059 Apr  3 10:10 533d6d18%%HO49X59SHE
--w-rwxr--. 1  5782 24263 1956 Apr  3 10:10 533d6d18%%LR8V8VC07U
--wxrwx-wx. 1 25912 61088 1702 Apr  3 10:10 533d6d18%%SB49QBPPVM
-rw-rwx-w-. 1 17638 34647 2026 Apr  3 10:10 533d6d18%%YCZ8WKERFT
drwxr-xr-x. 3 root  root  8192 Apr  3 10:15 level10
[root@localhost slave]# ls level00/ -l
total 29
--w-rwxr--. 1  5782 24263 1956 Apr  3 10:10 533d6bbf%%DYMM0BDP8E
-rw-rwx-w-. 1 17638 34647 2026 Apr  3 10:10 533d6bbf%%G9VK5YJ1W4
-----w----. 0 27374 52807 1059 Apr  3 10:10 533d6bbf%%K9TX9IDQDB
---------T. 0 root  root     0 Apr  3 10:17 533d6bbf%%M5YSC8PMVJ
dr-xr-xr-x. 3 47639 51862 8192 Apr  3 10:10 533d6d18%%43LKRD1XS2
---------T. 0 root  root     0 Apr  3 10:17 533d6d18%%6MFU3W1J7G
-----w----. 0 27374 52807 1059 Apr  3 10:10 533d6d18%%HO49X59SHE
--w-rwxr--. 1  5782 24263 1956 Apr  3 10:10 533d6d18%%LR8V8VC07U
--wxrwx-wx. 1 25912 61088 1702 Apr  3 10:10 533d6d18%%SB49QBPPVM
-rw-rwx-w-. 1 17638 34647 2026 Apr  3 10:10 533d6d18%%YCZ8WKERFT
drwxr-xr-x. 3 root  root  8192 Apr  3 10:16 level10


Version-Release number of selected component (if applicable): master with head 0c1d78f5c52c69268ec3a1d8d5fcb1a1bf15f243


How reproducible: Didn't try to reproduce. 


Steps to Reproduce:
1. Create and start the master and the slave volume on the same machine.
2. Create and start a geo-rep session between the two volumes.
3. Create some data on the master and let it sync.
4. Rename all of the created data (a rough command sketch follows below).
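
A rough reproduction sketch, assuming a single test node; the volume names, brick paths, mount point, and file counts below are hypothetical placeholders, and the exact geo-rep CLI options may differ between GlusterFS versions:

# 1. Create and start the master and slave volumes (hypothetical bricks).
HOST=$(hostname)
gluster volume create master ${HOST}:/bricks/master1 force
gluster volume create slave  ${HOST}:/bricks/slave1 force
gluster volume start master
gluster volume start slave

# 2. Create and start a geo-rep session between the two volumes.
gluster volume geo-replication master ${HOST}::slave create push-pem force
gluster volume geo-replication master ${HOST}::slave start

# 3. Create some data on the master mount and let it sync.
mount -t glusterfs ${HOST}:/master /mnt/master
mkdir -p /mnt/master/level00/level10
for i in $(seq 1 5); do
    dd if=/dev/urandom of=/mnt/master/level00/file_${i} bs=1k count=2
done
sleep 120   # allow the changelog crawl to propagate the files to the slave

# 4. Rename all of the created data.
for f in /mnt/master/level00/file_*; do
    mv "${f}" "${f}_renamed"
done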

Actual results: More files on the slave than on the master.


Expected results: The number of files on the master and on the slave should be the same.


Additional info:

The following traceback was seen in the master geo-rep log file:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[2014-04-03 10:16:46.297456] E [repce(/d/backends/master2):188:__call__] RepceClient: call 15235:139643211298624:1396534606.26 (entry_ops) failed on peer with OSError
[2014-04-03 10:16:46.297976] E [syncdutils(/d/backends/master2):240:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
    main_i()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 542, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 1177, in service_loop
    g2.crawlwrap()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 467, in crawlwrap
    self.crawl()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 1067, in crawl
    self.process(changes)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 825, in process
    self.process_change(change, done, retry)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 793, in process_change
    self.slave.server.entry_ops(entries)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
    return self.ins(self.meth, *a)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
    raise res
OSError: [Errno 39] Directory not empty
[2014-04-03 10:16:46.301832] I [syncdutils(/d/backends/master2):192:finalize] <top>: exiting.
[2014-04-03 10:16:46.318186] E [repce(/d/backends/master4):188:__call__] RepceClient: call 15239:140691526907712:1396534606.26 (entry_ops) failed on peer with OSError
[2014-04-03 10:16:46.318687] E [syncdutils(/d/backends/master4):240:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
    main_i()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 542, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 1177, in service_loop
    g2.crawlwrap()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 467, in crawlwrap
    self.crawl()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 1067, in crawl
    self.process(changes)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 825, in process
    self.process_change(change, done, retry)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 793, in process_change
    self.slave.server.entry_ops(entries)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
    return self.ins(self.meth, *a)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
    raise res
OSError: [Errno 39] Directory not empty

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
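
The OSError in the traceback is errno 39 (ENOTEMPTY), which rename(2) returns when the target of the rename is a directory that is not empty. A minimal illustration of the same failure, entirely outside geo-rep and using hypothetical /tmp paths (this is not the gsyncd code path, only a sketch of the condition the slave-side entry_ops runs into):

mkdir -p /tmp/demo/src /tmp/demo/dst
touch /tmp/demo/dst/leftover           # the destination directory is not empty
mv -T /tmp/demo/src /tmp/demo/dst      # rename(2) fails with "Directory not empty" (errno 39)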

Comment 2 Kaleb KEITHLEY 2015-10-22 15:40:20 UTC
The "pre-release" version is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

