Bug 1001980 - Dist-geo-rep : geo-rep created entry for 2 files with same name in same directory on slave after creating the hardlinks on master.
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Hardware: x86_64  OS: Linux
Priority: high  Severity: high
Assigned To: Bug Updates Notification Mailing List
Sudhir D
Keywords: ZStream
Depends On: 984603
Reported: 2013-08-28 04:58 EDT by Vijaykumar Koppad
Modified: 2014-08-24 20:50 EDT (History)
CC: 9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-09-25 04:45:56 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Vijaykumar Koppad 2013-08-28 04:58:07 EDT
Description of problem: After creating hardlinks on the master to all the files that had already been synced to the slave, geo-rep created two files with the same name in the same directory on the slave.

On the master, one of the gsyncd workers had a traceback like:

[2013-08-28 12:32:32.902442] E [repce(/bricks/brick3):188:__call__] RepceClient: call 28210:140139662915328:1377673350.78 (entry_ops) failed on peer with OSError
[2013-08-28 12:32:32.903069] E [syncdutils(/bricks/brick3):206:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 133, in main
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 513, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1062, in service_loop
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 369, in crawlwrap
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 783, in crawl
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 744, in process
    if self.process_change(change, done, retry):
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 724, in process_change
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
    raise res
OSError: [Errno 22] Invalid argument
[2013-08-28 12:32:32.905826] I [syncdutils(/bricks/brick3):158:finalize] <top>: exiting.
[2013-08-28 12:32:32.915518] I [monitor(monitor):81:set_state] Monitor: new state: faulty


On the slave, the log had a traceback like:

[2013-08-28 12:05:42.456854] I [resource(slave):630:service_loop] GLUSTER: slave listening
[2013-08-28 12:32:31.633302] E [repce(slave):103:worker] <top>: call failed: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 99, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 522, in entry_ops
    errno_wrap(os.link, [slink, entry], [ENOENT, EEXIST])
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 381, in errno_wrap
    return call(*arg)
OSError: [Errno 22] Invalid argument
[2013-08-28 12:32:31.645050] I [repce(slave):78:service_loop] RepceServer: terminating on reaching EOF.


Version-Release number of selected component (if applicable): glusterfs-

How reproducible: Did not try to reproduce again.

Steps to Reproduce:
1. Create and start a geo-rep session between the master and the slave.
2. Create files on the master: ./crefi.py -n 10 --multi -b 10 -d 10 --random --max=500K --min=10 /mnt/master/
3. Let them sync to the slave.
4. Now create hardlinks to all those files on the master: ./crefi.py -n 10 --multi -b 10 -d 10 --random --max=500K --min=10 --fop=hardlink /mnt/master/

Actual results: It created two files with the same name in the same directory on the slave.

Expected results: It should sync all files to the slave properly.

Additional info:
Comment 2 Vijaykumar Koppad 2013-08-31 09:30:22 EDT
This happened again with build glusterfs-, in a cascaded fan-out setup.
Comment 3 Amar Tumballi 2013-09-11 09:29:52 EDT
Considering bug 1001498 is fixed, can we see if this is an issue anymore?
Comment 4 Vijaykumar Koppad 2013-09-12 02:42:35 EDT
I haven't seen this issue again since build glusterfs-. I had raised this bug because the symptoms were different.
Comment 5 Vivek Agarwal 2013-09-25 04:45:56 EDT
As per comment 4, closing this.
