Bug 858961 - DHT - after adding brick/s, layout for root directory should be fixed on lookup itself so files created at root level after lookup can be distributed to all sub-vols
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: x86_64 Linux
Priority: low  Severity: low
Assigned To: shishir gowda
amainkar
Keywords: FutureFeature
Depends On:
Blocks:
Reported: 2012-09-20 04:04 EDT by Rachana Patel
Modified: 2015-04-20 09:35 EDT
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-26 02:00:41 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2012-09-20 04:04:55 EDT
Description of problem:
DHT - after adding brick/s, layout for root directory should be fixed on lookup itself so files created at root level after lookup can be distributed to all sub-vols


Version-Release number of selected component (if applicable):
3.3.0rhs-28.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a distributed volume with 2 or more sub-volumes and start the volume.
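For illustration (a minimal sketch; the volume name and brick paths are taken from the status output below):

[]# gluster volume create t1 XXX:/home/t1 XXX:/home/t2
[]# gluster volume start t1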

[]# gluster volume status t1
Status of volume: t1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick XXX:/home/t1                                      24010   Y       26319
Brick XXX:/home/t2                                      24011   Y       26324
NFS Server on localhost                                 38467   Y       26438
NFS Server on XXX                                       38467   Y       24117



2. FUSE-mount the volume from client-1 using "mount -t glusterfs server:/<volume> <client-1_mount_point>"

mount -t glusterfs XXX:/t1 /mnt/t1

3. From the mount point, create some files at the root level.
cd /mnt/t1
touch files{1..20}

4. Add brick(s) to the volume.
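For illustration (the new brick paths match the ones that appear in the status output under step 7):

[]# gluster volume add-brick t1 XXX:/home/t3 XXX:/home/t4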

5. From the mount point, execute the ls command.
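For example, using the mount point from step 2:

[]# ls /mnt/t1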

6. Create more files from the mount point at the root level.
touch files{21..50}

7. Check on the sub-volumes whether the files are distributed to the new sub-vols or not.
[]# gluster volume status t1
Status of volume: t1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick XXX:/home/t1                                      24010   Y       26319
Brick XXX:/home/t2                                      24011   Y       26324
Brick XXX:/home/t3                                      24012   Y       26426
Brick XXX:/home/t4                                      24013   Y       26432
NFS Server on localhost                                 38467   Y       26438
NFS Server on XXX                                       38467   Y       24117
 
[]# ls /home/t*
/home/t1:
files1   files15  files17  files20  files26  files34  files36  files38  files41  files43  files45  files48  files6
files10  files16  files2   files22  files33  files35  files37  files40  files42  files44  files46  files5

/home/t2:
files11  files13  files18  files21  files24  files27  files29  files30  files32  files4   files49  files7  files9
files12  files14  files19  files23  files25  files28  files3   files31  files39  files47  files50  files8

/home/t3:

/home/t4:


[~]# getfattr -d -m . -e hex /home/t3
getfattr: Removing leading '/' from absolute path names
# file: home/t3
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.volume-id=0x40e4bb5d6ddf45f5b55e88079a389196

[]# getfattr -d -m . -e hex /home/t4
getfattr: Removing leading '/' from absolute path names
# file: home/t4
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.volume-id=0x40e4bb5d6ddf45f5b55e88079a389196
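
Note: the new bricks above carry no trusted.glusterfs.dht xattr, i.e. no layout range for the root directory has been assigned to them. As an illustrative check, the root layout on an existing brick can be queried directly with:

[~]# getfattr -n trusted.glusterfs.dht -e hex /home/t1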

  
Actual results:
Files are not distributed to the newly added bricks.

Expected results:
Once brick(s) have been added successfully, newly created files (at the root level) should also be distributed to the new brick(s). A lookup should fix the layout for the root directory.

Additional info:
Comment 1 Rachana Patel 2012-09-20 04:06:15 EDT
Actual results:
Files are not distributed to the newly added bricks.
Comment 3 shishir gowda 2012-10-09 06:52:43 EDT
The workaround is to run a rebalance (fix-layout), which will fix the layouts.
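For reference (using the volume name from this report for illustration), the fix-layout rebalance can be started with:

[]# gluster volume rebalance t1 fix-layout start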
Comment 4 shishir gowda 2012-12-26 02:00:41 EST
The current behaviour of DHT of not re-writing the layout of root after an add-brick is correct. If a user needs the layout to be re-written, then a rebalance (with the fix-layout option) needs to be issued through the CLI.
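For illustration, after issuing the fix-layout through the CLI, its progress and completion can be checked with:

[]# gluster volume rebalance t1 status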
