Bug 763211 (GLUSTER-1479)

Summary: files are not distributed to servers 3 and 4.
Product: [Community] GlusterFS
Component: distribute
Version: 3.1-alpha
Hardware: All
OS: Linux
Status: CLOSED WORKSFORME
Severity: high
Priority: low
Reporter: Lakshmipathi G <lakshmipathi>
Assignee: Amar Tumballi <amarts>
CC: gluster-bugs, vijay, vraman
Doc Type: Bug Fix
Regression: RTP
Mount Type: fuse

Description Lakshmipathi G 2010-08-30 14:07:59 UTC
Started glusterfsd with 4 servers using dht. All files were created with the command

for i in {1..500}; do touch $i.txt; done

but they all end up on server 1 and server 2, whereas servers 3 and 4 have no files.
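
For reference, a quick way to confirm the skew is to count the files sitting on each brick directly on the servers. This is only a minimal sketch; the brick path /export/dd4 and the host names server1..server4 are assumptions, not taken from this report:

for host in server1 server2 server3 server4; do
    # count the entries in each brick's backend directory
    echo -n "$host: "
    ssh $host 'ls /export/dd4 | wc -l'
done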

Client process state dump:

[xlator.cluster.dht.dd4-dht.priv]
xlator.cluster.dht.dd4-dht.priv.subvolume_cnt=4
xlator.cluster.dht.dd4-dht.priv.subvolumes[0]=protocol/client.dd4-client-0
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].cnt=1
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].preset=1
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].gen=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].type=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].list[0].err=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].list[0].start=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].list[0].stop=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].list[0].xlator.type=protocol/client
xlator.cluster.dht.dd4-dht.priv.file_layouts[0].list[0].xlator.name=dd4-client-0
xlator.cluster.dht.dd4-dht.priv.subvolume_status[0]=1
xlator.cluster.dht.dd4-dht.priv.subvolumes[1]=protocol/client.dd4-client-1
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].cnt=1
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].preset=1
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].gen=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].type=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].list[0].err=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].list[0].start=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].list[0].stop=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].list[0].xlator.type=protocol/client
xlator.cluster.dht.dd4-dht.priv.file_layouts[1].list[0].xlator.name=dd4-client-1
xlator.cluster.dht.dd4-dht.priv.subvolume_status[1]=1
xlator.cluster.dht.dd4-dht.priv.subvolumes[2]=protocol/client.dd4-client-2
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].cnt=1
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].preset=1
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].gen=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].type=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].list[0].err=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].list[0].start=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].list[0].stop=0
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].list[0].xlator.type=protocol/client
xlator.cluster.dht.dd4-dht.priv.file_layouts[2].list[0].xlator.name=dd4-client-2
xlator.cluster.dht.dd4-dht.priv.subvolume_status[2]=1

Comment 1 Amar Tumballi 2010-09-01 03:58:32 UTC
Hi Lakshmi,

Can you run the command below and let me know the output?

bash# getfattr -n trusted.glusterfs.pathinfo <$dir>

where $dir is the directory where the files were created.
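
An illustrative way to collect this for every test file is a small loop over the mount point (here /mnt/dd4, which is an assumption; the actual mount path is not stated in this report):

for i in {1..500}; do
    # ask glusterfs which backend brick each file landed on
    getfattr -n trusted.glusterfs.pathinfo /mnt/dd4/$i.txt
done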

Comment 2 Lakshmipathi G 2010-09-01 07:12:37 UTC
Hi Amar,
Tested with 3.1.0qa13 (and with the latest git); the files are now distributed properly across all four servers.

Comment 3 Amar Tumballi 2010-09-01 09:15:14 UTC
This could have happened in one scenario:

* this is a fresh gluster mount (i.e., the export directories are freshly created)
* the connections to server 1 and server 2 are established
* the lookup on the root comes from fuse and succeeds
* servers 3 and 4 get connected only after that lookup

In this case, distribute writes the layout only on servers 1 and 2, leaving 3 and 4 idle.

In future, this can be fixed by running 'glusterfs-defrag <mount-point>' on the mount point, or, if it is a volume created/started from the gluster CLI, by running 'gluster volume rebalance <VOLNAME> start'.
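
A minimal sketch of the CLI-managed case, assuming the volume name is dd4 (suggested by the dd4-dht/dd4-client names in the state dump above):

# once all four bricks are connected, rebalance so that servers 3 and 4
# get their share of the layout and of the existing files
gluster volume rebalance dd4 start

# poll until the rebalance run completes
gluster volume rebalance dd4 status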

Resolving.