Bug 856455

Summary: DHT: getfattr on directory (from mount point) gives error 'Transport endpoint is not connected' if cached sub-vol is down
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterfs
Version: 2.0
Hardware: x86_64
OS: Linux
Severity: medium
Priority: medium
Status: CLOSED ERRATA
Reporter: Rachana Patel <racpatel>
Assignee: Venky Shankar <vshankar>
QA Contact: amainkar
CC: rhs-bugs, vbellur, vshankar
Type: Bug
Doc Type: Bug Fix
Fixed In Version: glusterfs-3.4.0.4rhs-1
Last Closed: 2013-09-23 22:33:22 UTC
Attachments: mount-log

Description Rachana Patel 2012-09-12 04:56:50 UTC
Description of problem:
DHT: getfattr on a directory (from the mount point) gives the error 'Transport endpoint is not connected' when a brick is down, even though the directory is hashed to another sub-volume which is up.
Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Create a distributed volume with 3 or more sub-volumes across multiple servers and start it (a sketch of the commands is shown below).

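A minimal sketch of those commands, assuming the brick paths shown in the volume info output below (hostnames XX1, XX2, XX3 are placeholders):

[]# gluster volume create test XX1:/home/test XX2:/home/test XX3:/home/test
[]# gluster volume start test
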
[root@Rhs3 ~]# gluster volume info test
 
Volume Name: test
Type: Distribute
Volume ID: da498fa8-68fc-4de5-a682-07f26eb14c17
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: XX1:/home/test
Brick2: XX2:/home/test
Brick3: XX3:/home/test


2. FUSE-mount the volume from client-1 using "mount -t glusterfs server:/<volume> <client-1_mount_point>".
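
For example (a sketch, assuming server XX1 and the mount point /mnt/test used in the later steps):

[]# mkdir -p /mnt/test
[]# mount -t glusterfs XX1:/test /mnt/test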

3. From the mount point, create some directories and some files inside them.
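
For example (assuming the directory name d10 used in the later steps; the file names are illustrative):

[]# mkdir /mnt/test/d10
[]# touch /mnt/test/d10/f1 /mnt/test/d10/f2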

4. Find the hash value for the directory name and determine its hashed sub-volume:

[]# getfattr -m . -n trusted.glusterfs.pathinfo /mnt/test
getfattr: Removing leading '/' from absolute path names
# file: /mnt/test
trusted.glusterfs.pathinfo="((<DISTRIBUTE:test-dht> <POSIX(/home/test):Rhs2:/home/test/>) (test-dht-layout (test-client-2 1431655765 2863311529) (test-client-0 2863311530 4294967295) (test-client-1 0 1431655764)))"

The hash value for dir 'd10' is 0x4b423224 (1262629412 decimal), which falls in the range [0, 1431655764] that the layout above assigns to test-client-1,
so the directory should hash to sub-vol 'test-client-1'.
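
A minimal bash sketch of that range lookup (the hash value itself comes from DHT's internal hash function and is taken as given here; the range bounds are copied from the pathinfo layout above):

[]# hash=$(( 0x4b423224 ))   # 1262629412 decimal
[]# # test-client-1 owns the range [0, 1431655764] per the layout above
[]# (( hash >= 0 && hash <= 1431655764 )) && echo "d10 hashes to test-client-1"
d10 hashes to test-client-1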

5. Bring another sub-volume down (the hashed sub-volume should stay up; bring the cached sub-volume down), e.g. by killing its brick process as sketched below.

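One way to do this (a sketch, assuming the kill is run on host XX3, using the brick PID reported by 'gluster volume status' below):

[root@XX3 ~]# kill 12116   # glusterfsd PID of brick XX3:/home/test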

[root@Rhs3 test]# gluster volume status test
Status of volume: test
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick XX1:/home/test				24009	Y	11463
Brick XX2:/home/test				24009	Y	16287
Brick XX3:/home/test				24009	N	12116
NFS Server on localhost				38467	Y	16293
NFS Server on XXX				38467	Y	16005
NFS Server on xxx				38467	Y	15892

6. From the mount point, execute getfattr on dir d10:
[]# getfattr -m . -n trusted.glusterfs.pathinfo d10
d10: trusted.glusterfs.pathinfo: Transport endpoint is not connected
  
Actual results:
It only looks at the first (cached) sub-volume; when that sub-volume is down, it gives the error 'Transport endpoint is not connected'.

Expected results:
Directories are created on all sub-volumes, so in any case (hashed sub-vol down, cached sub-vol down, or any other sub-vol down) it should be able to get and display the xattr value from another sub-volume.

Additional info:

Comment 2 Rachana Patel 2012-09-12 05:25:45 UTC
Created attachment 611991 [details]
mount-log

Comment 3 Vijay Bellur 2013-02-09 03:10:00 UTC
CHANGE: http://review.gluster.org/4047 (cluster/dht: pathinfo xattr changes for directories) merged in master by Anand Avati (avati)

Comment 4 Rachana Patel 2013-03-14 09:21:46 UTC
Even though all bricks are up, right now
"getfattr -m . -n trusted.glusterfs.pathinfo <dir>" shows the entry from one sub-volume only; it should show entries from all sub-volumes.

Comment 5 Rachana Patel 2013-05-09 10:34:17 UTC
Verified with 3.3.4.0.4rhs-1.el6rhs.x86_64:
not able to reproduce; working as per the expected result.

So changing status to VERIFIED.

Comment 6 Scott Haines 2013-09-23 22:33:22 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html