Bug 761903 (GLUSTER-171) - fuse_loc_fill() failed on XFS volumes
Summary: fuse_loc_fill() failed on XFS volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-171
Product: GlusterFS
Classification: Community
Component: distribute
Version: 2.0.4
Hardware: i386
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Anand Avati
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-07-28 19:28 UTC by Olexander Shtepa
Modified: 2015-09-01 23:04 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTNR
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Anand Avati 2009-07-28 16:34:58 UTC
Is this a 32bit system?

Comment 1 Olexander Shtepa 2009-07-28 19:28:25 UTC
I have tried to use the XFS file system for bricks, but ran into problems:
1. cluster/unify auto-heal does not work on XFS volumes.
2. cluster/distribute can't use new empty XFS bricks.
After reformatting the XFS volumes to ext3, cluster/distribute works.

The system is an up-to-date CentOS 5.3:
glusterfs-server-2.0.4-1.el5.centos.el5
glusterfs-common-2.0.4-1.el5.centos.el5
glusterfs-client-2.0.4-1.el5.centos.el5
fuse-2.7.4-1.el5.rf
dkms-fuse-2.7.4-1.nodist.rf

How to reproduce:
Vol file:
 volume v1
   type storage/posix
   option directory /data/1
 end-volume

 volume v2
   type storage/posix
   option directory /data/2
 end-volume
 
 volume distribute
   type cluster/distribute
   option lookup-unhashed yes
   option min-free-disk 1%
   subvolumes v1 v2
 end-volume

Directory /data is an XFS-mounted volume:
 /dev/md4 on /data type xfs (rw,noatime,nodiratime)

Initial file hierarchy:
/data/1
/data/1/somedir
/data/1/somedir/somefile    # content is '123\n'
/data/2
I.e. directory /data/2 is empty and /data/1 has one directory containing one file. None of them have any extended attributes.

Mount GlusterFS:
# /usr/sbin/glusterfs --log-level=DEBUG --volfile=/etc/glusterfs/glusterfs-test.vol /mnt/test

Results:
# cd /mnt/test
# ls -l
total 0
drwxr-xr-x 2 root root 27 Jul 28 19:48 somedir
# ls -l somedir
ls: somedir: No such file or directory

Extended attributes of the files after this:
# getfattr -d -m - /data/1 /data/1/somedir /data/1/somedir/somefile /data/2 /data/2/somedir
getfattr: Removing leading '/' from absolute path names
# file: data/1
trusted.glusterfs.dht=0sAAAAAQAAAAAAAAAAf////g==
trusted.glusterfs.test="working\000"

# file: data/1/somedir
trusted.glusterfs.dht=0sAAAAAQAAAAAAAAAA/////w==

# file: data/2
trusted.glusterfs.dht=0sAAAAAQAAAAB//////////w==
trusted.glusterfs.test="working\000"

From log:
[2009-07-28 20:20:03] N [glusterfsd.c:1224:main] glusterfs: Successfully started
[2009-07-28 20:20:17] D [dht-layout.c:489:dht_layout_normalize] distribute: directory / looked up first time
[2009-07-28 20:20:17] D [dht-common.c:161:dht_lookup_dir_cbk] distribute: fixing assignment on /
[2009-07-28 20:20:20] D [dht-common.c:397:dht_lookup_everywhere_cbk] distribute: found on v1 directory /somedir
[2009-07-28 20:20:20] D [dht-common.c:113:dht_lookup_dir_cbk] distribute: lookup of /somedir on v2 returned error (No such file or directory)
[2009-07-28 20:20:20] D [dht-layout.c:489:dht_layout_normalize] distribute: directory /somedir looked up first time
[2009-07-28 20:20:20] D [dht-layout.c:507:dht_layout_normalize] distribute: path=/somedir err=No such file or directory on subvol=v2
[2009-07-28 20:20:20] D [dht-common.c:161:dht_lookup_dir_cbk] distribute: fixing assignment on /somedir
[2009-07-28 20:20:20] D [dht-common.c:68:dht_lookup_selfheal_cbk] distribute: could not find hashed subvolume for /somedir
[2009-07-28 20:20:28] D [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 0/(null) (1073765340)
[2009-07-28 20:20:28] W [fuse-bridge.c:1655:fuse_opendir] glusterfs-fuse: 15: OPENDIR (null) (fuse_loc_fill() failed)

Comment 2 Olexander Shtepa 2009-07-29 03:55:42 UTC
> Is this a 32bit system?

Yes.

Comment 3 Anand Avati 2009-07-29 20:46:33 UTC
(In reply to comment #2)
> > Is this a 32bit system?
> 
> Yes.

This is a known limitation in fuse. Please refer to the following mailing list thread, which discusses the same issue:

http://lists.gnu.org/archive/html/gluster-devel/2009-07/msg00136.html

Thanks,
Avati
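
For illustration, a minimal standalone C program (this is not GlusterFS or fuse code, and the inode value is hypothetical) showing the class of problem being referenced: on a 32-bit userland, an inode number that does not fit in a 32-bit unsigned type is silently truncated when it is passed through one.

 /* Not GlusterFS code: hypothetical inode value, for illustration only. */
 #include <stdint.h>
 #include <stdio.h>

 int main(void)
 {
     /* A 64-bit inode number larger than 2^32. */
     uint64_t backend_ino = 4294967296ULL + 12345ULL;

     /* What survives if it is ever passed through a 32-bit type,
      * e.g. a 32-bit ino_t / unsigned long on i386. */
     uint32_t truncated = (uint32_t)backend_ino;

     printf("64-bit inode:         %llu\n", (unsigned long long)backend_ino);
     printf("truncated to 32 bits: %lu\n", (unsigned long)truncated);
     return 0;
 }

Whether truncation past 2^32 or the signed 2^31 boundary (see comment 4 below) is the exact failure path here is not established in this bug; the mailing list thread above has the details.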

Comment 4 Olexander Shtepa 2009-07-30 04:22:27 UTC
I see there is a problem with inode numbers > unsigned long max (32-bit).
I have done some tests and found that the problem occurs with inodes > signed long max.

First test (inode < unsigned long max, but > signed long max).
Empty directories:
# find /data -ls
1764732355    0 drwxr-xr-x   2 root     root            6 Jul 30 07:54 /data/1
4161929683    0 drwxr-xr-x   2 root     root            6 Jul 30 07:54 /data/2

Result:
# cd /mnt/test                                              
# mkdir somedir                                             
# echo "123" > somedir/somefile                             
bash: somedir/somefile: No such file or directory                             
# find . -ls                                                
     1    8 drwxr-xr-x   3 root     root           40 Jul 30 07:57 .                                                  
1926026069    0 drwxr-xr-x   2 root     root           12 Jul 30 07:57 ./somedir                                      
find: ./somedir: No such file or directory                                                                            

Second test (inode < signed long max).
# find /data -ls
1260203581    0 drwxr-xr-x   2 root     root            6 Jul 30 08:01 /data/1
711819743    0 drwxr-xr-x   2 root     root            6 Jul 30 08:00 /data/2

Result:
# cd /mnt/test
# mkdir somedir
# echo "123" > somedir/somefile
# find . -ls
     1    8 drwxr-xr-x   3 root     root           40 Jul 30 08:02 .
3233613583    0 drwxr-xr-x   2 root     root           27 Jul 30 08:02 ./somedir
3530204072    4 -rw-r--r--   1 root     root            4 Jul 30 08:02 ./somedir/somefile
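
The threshold observed here (signed long max, i.e. 2^31 - 1) is consistent with the inode number passing through a signed 32-bit type somewhere on the path. Below is a minimal standalone illustration of that arithmetic (not GlusterFS code; it only reuses the /data/2 inode number from the first test and does not identify where the conversion actually happens):

 /* Not GlusterFS code: illustrates sign-extension of a 32-bit inode > 2^31. */
 #include <stdint.h>
 #include <stdio.h>

 int main(void)
 {
     uint32_t ino       = 4161929683U;         /* /data/2 in the first test, above 2^31 */
     int32_t  as_signed = (int32_t)ino;        /* reinterpreted as signed: negative */
     uint64_t widened   = (uint64_t)as_signed; /* sign-extended when widened to 64 bits */

     printf("unsigned 32-bit inode: %u\n", ino);
     printf("as signed 32-bit:      %d\n", as_signed);
     printf("widened to 64 bits:    %llu\n", (unsigned long long)widened);
     return 0;
 }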

Comment 5 Olexander Shtepa 2009-08-03 09:17:10 UTC
I have done some extra tests. I have created an ext3 volume with more than 2^31 inodes:

# tune2fs -l /dev/sdb5 |grep "^Inode c"
Inode count:              2500645952
# mount |grep sdb5
/dev/sdb5 on /mnt/sdb5 type ext3 (rw)

Created two directories with big inode numbers:
# find /mnt/sdb5 -ls
     2    4 drwxr-xr-x   4 root     root         4096 Aug  3 13:12 /mnt/sdb5
2444231553    4 drwxr-xr-x   2 root     root         4096 Aug  3 13:11 /mnt/sdb5/2
2446455425    4 drwxr-xr-x   2 root     root         4096 Aug  3 13:11 /mnt/sdb5/1

Vol file:
 volume v1
   type storage/posix
   option directory /mnt/sdb5/1
 end-volume

 volume v2
   type storage/posix
   option directory /mnt/sdb5/2
 end-volume

 volume distribute
   type cluster/distribute
   option lookup-unhashed yes
   option min-free-disk 1%
   subvolumes v1 v2
 end-volume

# /usr/sbin/glusterfs --log-level=DEBUG --volfile=/etc/glusterfs/glusterfs-test.vol /mnt/test
# cd /mnt/test
# mkdir somedir
# find . ls
.
./somedir
find: ./somedir: No such file or directory
find: ls: No such file or directory

From log:
[2009-08-03 13:15:01] D [dht-layout.c:489:dht_layout_normalize] distribute: directory / looked up first time
[2009-08-03 13:15:01] D [dht-common.c:161:dht_lookup_dir_cbk] distribute: fixing assignment on /
[2009-08-03 13:15:14] D [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 0/(null) (593495813)
[2009-08-03 13:15:14] W [fuse-bridge.c:1655:fuse_opendir] glusterfs-fuse: 16: OPENDIR (null) (fuse_loc_fill() failed)

So this is not only an XFS problem.

Comment 6 Anand Avati 2009-11-12 12:42:50 UTC
The 3.0.x releases have this fixed with the new scheme of using the nodeid in the fuse module.
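
For context, a rough sketch of the general idea behind an opaque nodeid (this is not the actual 3.0.x fuse-bridge code, just an illustration): the filesystem hands FUSE a 64-bit handle of its own choosing and keeps the mapping to backend inodes itself, so the width of the on-disk inode number no longer limits what can be represented.

 /* Not GlusterFS code: a toy nodeid table, for illustration only. */
 #include <stdint.h>
 #include <stdio.h>

 #define MAX_NODES 16                        /* toy-sized table */

 static uint64_t backend_ino_of[MAX_NODES];  /* nodeid -> backend inode */
 static uint64_t next_nodeid = 2;            /* nodeid 1 is conventionally the root */

 /* Hypothetical helper: remember a backend inode, hand out an opaque nodeid. */
 static uint64_t nodeid_for(uint64_t backend_ino)
 {
     if (next_nodeid >= MAX_NODES)
         return 0;                           /* table full: toy error handling */
     uint64_t id = next_nodeid++;
     backend_ino_of[id] = backend_ino;
     return id;
 }

 int main(void)
 {
     /* An inode above 2^31 round-trips unchanged through the mapping. */
     uint64_t id = nodeid_for(4161929683ULL);
     printf("nodeid %llu -> backend inode %llu\n",
            (unsigned long long)id,
            (unsigned long long)backend_ino_of[id]);
     return 0;
 }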

Comment 7 Olexander Shtepa 2009-11-13 04:39:25 UTC
That is good news, thanks.

