The following was reported via IRC. The user had created a volume using /hekafs-export/www as the brick directory, later removed that volume, and then used /hekafs-export as the brick directory for a new volume. This led to an endless stream of this error message:

Dec 2 01:32:44 hakafs01 GlusterFS[17549]: [2011-12-02 01:32:44.661161] C [inode.c:232:__is_dentry_cyclic] 0-posix-acl-autoload/inode: detected cyclic loop formation during inode linkage. inode (1/00000000-0000-0000-0000-000000000001) linking under itself as www

getfattr -m . -d -e hex /hekafs-export /hekafs-export/www
getfattr: Removing leading '/' from absolute path names
# file: hekafs-export
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a64656661756c745f743a733000
trusted.afr.plenty-client-2=0x000000000000000000000000
trusted.afr.plenty-client-3=0x000000000000000000000000
trusted.afr.plentyTest-client-2=0x000000000000000000000000
trusted.afr.plentyTest-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.test=0x776f726b696e6700

# file: hekafs-export/www
security.selinux=0x73797374656d5f753a6f626a6563745f723a64656661756c745f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.test=0x776f726b696e6700

The user did not understand the error message, nor was he able to deduce from it what needed to be done to correct the problem. It seems like this should be self-correcting.
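To make the failure mode concrete: both /hekafs-export and its leftover subdirectory /hekafs-export/www carry the same trusted.gfid (the root gfid, ...0001), because the subdirectory was the root of the previous volume. A small sketch (not GlusterFS code; the helper name and sample data are illustrative, taken from the getfattr output above) shows how the duplicate can be spotted:

```python
# Hedged sketch: detect when a brick subdirectory carries the same
# trusted.gfid as the brick root, which is the situation in this report.
# parse_gfids is a hypothetical helper; the input mirrors `getfattr -d -e hex`.

def parse_gfids(getfattr_output):
    """Map each '# file:' path to its trusted.gfid hex value."""
    gfids, current = {}, None
    for line in getfattr_output.splitlines():
        line = line.strip()
        if line.startswith("# file:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("trusted.gfid=") and current:
            gfids[current] = line.split("=", 1)[1]
    return gfids

sample = """\
# file: hekafs-export
trusted.gfid=0x00000000000000000000000000000001

# file: hekafs-export/www
trusted.gfid=0x00000000000000000000000000000001
"""

gfids = parse_gfids(sample)
# Both paths claim the root gfid: this is why the inode table sees www
# "linking under itself" and logs the cyclic-loop error.
duplicates = len(set(gfids.values())) < len(gfids)
print(duplicates)  # True: stale xattrs from the old /hekafs-export/www volume
```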
CHANGE: http://review.gluster.com/781 (extras: clean up a brick's gfid xattr) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/2514 (extras: add check for brick path existence) merged in master by Vijay Bellur (vijay)
This patch doesn't make any behavioural changes. It just adds a script, clear_xattr.sh, to the GlusterFS sources, which can be used to remove xattrs from bricks that previously belonged to some other volume. I think this should be documented properly. Moving it to DP.
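The general idea behind such cleanup can be sketched as follows. This is not the actual clear_xattr.sh from the GlusterFS sources; it is a hedged dry-run sketch (the function name is made up) that only prints the setfattr commands it would run, assuming getfattr/setfattr from the "attr" package are installed:

```shell
# Hedged sketch: list the trusted.* xattrs under a brick path and print
# the setfattr commands that would strip them, so the path can be reused
# as a brick for a new volume. Dry run: pipe the output to `sh` as root
# to actually apply it. print_xattr_cleanup is a hypothetical helper.
print_xattr_cleanup() {
    brick=$1
    find "$brick" 2>/dev/null | while read -r f; do
        getfattr -m '^trusted\.' -- "$f" 2>/dev/null \
            | grep '^trusted\.' \
            | while read -r xa; do
                # Emit the command rather than running it.
                printf 'setfattr -x %s -- %s\n' "$xa" "$f"
            done
    done
}

# Example: show what would be stripped from the brick from this report
# (hypothetical path; prints nothing unless run on the affected host).
print_xattr_cleanup /hekafs-export
```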
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailinglist [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user