Additional details can be found in the share drive: /share/tickets/113
Looking at the logs, I see that /rsync itself doesn't exist on any of the bricks, hence the mkdirs within it failed. Can you confirm that the user creating the directories has the proper permissions on the top-level directory?
(In reply to comment #2)
> by looking at the logs, I see that /rsync itself doesn't exists on any bricks,
> hence the mkdirs within it failed. Can you confirm the user who is creating
> directories has proper permissions in the top level directory?

The user creating the directory was root (uid = 0). mkdir(rsync) also failed with EEXIST, even though the directory was not visible on the mount point or on any of the back-end export directories.

-- Gowda
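For reference, the two failure modes discussed in this ticket can be told apart by errno: mkdir(2) returns EEXIST when the target already exists and ENOENT when a parent component is missing. A minimal sketch against a local scratch directory (not a Gluster mount; the paths below are made up for illustration):

```shell
#!/bin/sh
# Sketch: distinguishing EEXIST from ENOENT on mkdir.
# Runs against a local scratch directory, not a Gluster mount.
export LC_ALL=C
d=$(mktemp -d)
mkdir "$d/rsync"                  # first create succeeds
mkdir "$d/rsync" 2>&1             # EEXIST: "File exists"
mkdir "$d/missing/rsync" 2>&1     # ENOENT: "No such file or directory"
rm -rf "$d"
```

The bug here is that the stripe volume reported EEXIST for a directory that was not visible anywhere, which is the opposite of what a correct filesystem would return.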
Steps:
1. Create directory rsync on the mount point.
2. rsync /usr onto the mount point.
3. rm -rf mountpoint/rsync
4. mkdir rsync: <fails> no such file or directory

Find the attached logs for server and client at:
http://dev.gluster.com/~sac/client.2.0.3rc2.stripe.log
http://dev.gluster.com/~sac/server.brick1.2.0.3rc2.stripe.log
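The reproduction steps above can be sketched as a script. $MOUNT is an assumed mount-point path, not one taken from the ticket:

```shell
#!/bin/sh
# Sketch of the reproduction steps; MOUNT is an assumed
# GlusterFS stripe mount point (not named in the ticket).
MOUNT=${MOUNT:-/mnt/gluster}
mkdir "$MOUNT/rsync"              # step 1: create the directory
rsync -a /usr/ "$MOUNT/rsync/"    # step 2: rsync /usr into it
rm -rf "$MOUNT/rsync"             # step 3: remove it
mkdir "$MOUNT/rsync"              # step 4: fails with ENOENT on the affected build
```

On a healthy filesystem the final mkdir succeeds; on the affected stripe volume it returned "no such file or directory".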
(In reply to comment #1)
> Additional details can be found in the share drive: /share/tickets/113

I tried to reproduce the problem and noticed that the error I get is EBUSY:

[root@client01 sac]# /share/sac/scripts/rsync_test.sh
===== 0 =====
===== 1 =====
===== 2 =====
===== 3 =====
===== 4 =====
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(260) [sender=2.6.8]
rsync: writefd_unbuffered failed to write 96 bytes [generator]: Broken pipe (32)
rsync error: error in rsync protocol data stream (code 12) at io.c(1119) [generator=2.6.8]
[root@client01 sac]# rm -rf rsync/
[root@client01 sac]# /share/sac/scripts/rsync_test.sh
mkdir: cannot create directory `rsync': Device or resource busy

I couldn't reproduce the EEXIST problem.

Regards,
This problem was due to EBUSY errors coming from FUSE; they went away with inode generation support. Closing the ticket; will create new tickets for any remaining stripe issues (if any :p).