Bug 761845 (GLUSTER-113) - mkdir fails on stripe configuration
Summary: mkdir fails on stripe configuration
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-113
Product: GlusterFS
Classification: Community
Component: stripe
Version: mainline
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-07-07 07:16 UTC by Sachidananda Urs
Modified: 2013-12-19 00:03 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTA
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Sachidananda Urs 2009-07-07 04:19:42 UTC
Additional details can be found in the share drive: /share/tickets/113

Comment 1 Amar Tumballi 2009-07-07 06:01:36 UTC
Looking at the logs, I see that /rsync itself doesn't exist on any of the bricks, hence the mkdirs within it failed. Can you confirm that the user creating the directories has proper permissions on the top-level directory?
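(A quick way to check this on the back-end is sketched below; the hostnames and export paths are placeholders for whatever the volume files actually use:)

for host in server1 server2; do
    ssh $host 'ls -ld /export/brick1 /export/brick1/rsync'
done

ls -ld shows the owner and mode of both the export directory and the rsync directory on each brick, or an ENOENT error if the directory is missing there.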

Comment 2 Basavanagowda Kanur 2009-07-07 06:17:31 UTC
(In reply to comment #1)
> Looking at the logs, I see that /rsync itself doesn't exist on any of the
> bricks, hence the mkdirs within it failed. Can you confirm that the user
> creating the directories has proper permissions on the top-level directory?


The user who was creating the directory: root (uid = 0)

mkdir(rsync) also failed with EEXIST, even though the directory was not visible on the mount point or on any of the back-end export directories.
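(To confirm the exact errno coming back from the failing mkdir, it can be run under strace; /mnt/glusterfs below is a placeholder for the actual mount point:)

strace -e trace=mkdir,mkdirat mkdir /mnt/glusterfs/rsync

A failing call shows up as a line along the lines of:
mkdir("/mnt/glusterfs/rsync", 0777) = -1 EEXIST (File exists)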

--
Gowda

Comment 3 Sachidananda Urs 2009-07-07 07:16:42 UTC
Steps to reproduce (scripted below):
1. Create the directory rsync on the mount point.
2. rsync /usr onto the mount point.
3. rm -rf mountpoint/rsync
4. mkdir rsync: <fails> No such file or directory
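In script form (a sketch; /mnt/glusterfs stands in for the actual mount point):

MP=/mnt/glusterfs
mkdir $MP/rsync              # step 1
rsync -a /usr/ $MP/rsync/    # step 2
rm -rf $MP/rsync             # step 3
mkdir $MP/rsync              # step 4: fails with "No such file or directory"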

Find the server and client logs at:

http://dev.gluster.com/~sac/client.2.0.3rc2.stripe.log
http://dev.gluster.com/~sac/server.brick1.2.0.3rc2.stripe.log

Comment 4 Amar Tumballi 2009-07-07 18:18:19 UTC
(In reply to the description)
> Additional details can be found in the share drive: /share/tickets/113

I tried to reproduce the error and noticed that the error I get is EBUSY:

[root@client01 sac]# /share/sac/scripts/rsync_test.sh
===== 0 =====
===== 1 =====
===== 2 =====
===== 3 =====
===== 4 =====
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(260) [sender=2.6.8]
rsync: writefd_unbuffered failed to write 96 bytes [generator]: Broken pipe (32)
rsync error: error in rsync protocol data stream (code 12) at io.c(1119) [generator=2.6.8]
[root@client01 sac]# rm -rf rsync/
[root@client01 sac]# /share/sac/scripts/rsync_test.sh 
mkdir: cannot create directory `rsync': Device or resource busy


Couldn't reproduce the problem with EEXIST.
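(For the record, a sketch of how the run can be repeated while recording the errno of the final mkdir each time, to see whether EBUSY or EEXIST turns up; /mnt/glusterfs is again a placeholder for the actual mount point:)

for i in 1 2 3 4 5; do
    rm -rf /mnt/glusterfs/rsync
    rsync -aq /usr/ /mnt/glusterfs/rsync/
    rm -rf /mnt/glusterfs/rsync
    strace -e trace=mkdir,mkdirat -o /tmp/mkdir.$i.out mkdir /mnt/glusterfs/rsync
done
grep mkdir /tmp/mkdir.*.out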

Regards,

Comment 5 Amar Tumballi 2009-11-11 17:43:23 UTC
This problem was due to EBUSY issues coming from FUSE; they went away with inode generation support. Closing the ticket; will create new tickets for stripe issues (if any :p)

