Bug 764737 (GLUSTER-3005) - trusted.glusterfs.dht not set on 3rd replica
Summary: trusted.glusterfs.dht not set on 3rd replica
Keywords:
Status: CLOSED NOTABUG
Alias: GLUSTER-3005
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.0.5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-08 22:03 UTC by Joe Julian
Modified: 2011-09-22 13:26 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:



Description Joe Julian 2011-06-08 19:05:18 UTC
I manually set trusted.glusterfs.dht on the root directories of the 3rd replicas for all my bricks to be identical to the other two copies and have not experienced the "Stale NFS file handle" issue since.
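
Roughly, the workaround amounts to something like the following sketch (Python 3; the brick paths are placeholders rather than an actual layout, and it has to run as root directly on the brick filesystems, not through the client mount):

#!/usr/bin/env python3
# Sketch of the manual workaround: copy trusted.glusterfs.dht from a brick
# root that already has it onto a replica brick root that is missing it.
# SOURCE_BRICK and TARGET_BRICK are placeholder paths -- substitute your own.
import os
import sys

SOURCE_BRICK = "/export/brick1"   # replica whose root already has the xattr
TARGET_BRICK = "/export/brick3"   # 3rd replica where the xattr is missing
XATTR = "trusted.glusterfs.dht"

try:
    layout = os.getxattr(SOURCE_BRICK, XATTR)
except OSError as err:
    sys.exit("cannot read %s from %s: %s" % (XATTR, SOURCE_BRICK, err))

# Write the identical layout bytes onto the target brick root.
os.setxattr(TARGET_BRICK, XATTR, layout)
print("copied %s (%d bytes) to %s" % (XATTR, len(layout), TARGET_BRICK))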

Comment 1 Joe Julian 2011-06-08 22:03:12 UTC
This is actually on 3.0.8, which is not available for selection.

On several new volumes, trusted.glusterfs.dht was not written to the volume root, causing the fuse client to eventually enter a perpetual loop, repeating:

[2011-06-08 12:18:53] D [dht-layout.c:649:dht_layout_dir_mismatch] uspaweb-dht: / - disk layout missing
[2011-06-08 12:18:53] D [dht-common.c:274:dht_revalidate_cbk] uspaweb-dht: mismatching layouts for /

until unmounted.

This would cause the user to receive the "Stale NFS file handle" error message when accessing that mount.
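
To spot which brick roots are missing the xattr, a minimal check along these lines works (the brick paths are placeholders, and trusted.* attributes need root to read):

#!/usr/bin/env python3
# Report which brick roots are missing trusted.glusterfs.dht.
# BRICK_ROOTS are placeholder paths; run as root on each storage server.
import os

BRICK_ROOTS = ["/export/brick1", "/export/brick2", "/export/brick3"]
XATTR = "trusted.glusterfs.dht"

for brick in BRICK_ROOTS:
    try:
        value = os.getxattr(brick, XATTR)
        print("%s: present (%d bytes)" % (brick, len(value)))
    except OSError:
        print("%s: MISSING %s" % (brick, XATTR))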

I rolled further back to 3.0.7 before I discovered where the problem was coming from. Here's the full tracelog of the client:
http://joejulian.name/tracelog-3.0.7-dht_layout_dir_mismatch.log

I realize that you've stated in another bug report that the gfid change in 3.1+ obsoletes this xattr, but the fact that it's not replicating to the 3rd replica may still be significant in the current tree.

Comment 2 Kaushal 2011-09-22 06:29:37 UTC
Hi Joe.

I couldn't reproduce this.
I created a dist-rep setup with 6 and 9 bricks with replica=3 using gluster 3.0.8 built from source and tested several times. Every time, trusted.glusterfs.dht gets set on the root dir of the 3rd replica's brick.

Can you confirm this still exists? And if it does, can you give a test case for reproducing it?

Thanks.

Comment 3 Amar Tumballi 2011-09-22 06:40:06 UTC
Joe,

Are you still using the 3.0.x series? We would like to test mostly against the 3.2.x or master branch for any bugs as of now (in some cases 3.1.x may be valid). Let us know if it is OK to close this bug, considering 3.0.x is not actively worked on. I will make sure we add this to the test cases for master.

Comment 4 Joe Julian 2011-09-22 10:26:07 UTC
I no longer have a 3.0 installation to test against. I'll go ahead and close this.

