Bug 1279799 - Files are not distributed to hot tier based on brick size during promotion and new file creation
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.1
Hardware: Unspecified   OS: Unspecified
Priority: unspecified   Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Dan Lambright
QA Contact: nchilaka
Keywords: ZStream
Depends On:
Blocks: 1268895
Reported: 2015-11-10 05:52 EST by RajeshReddy
Modified: 2016-09-17 11:37 EDT (History)
CC: 9 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
The distributed hashing algorithm does not distribute files across the bricks and sub-volumes that make up the hot tier based on the size of the volumes. Instead, it fills a sub-volume until the minimum free disk threshold is reached, at which point another sub-volume will start being filled.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-02-15 06:46:58 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description RajeshReddy 2015-11-10 05:52:44 EST
Description of problem:
====================
Files are not distributed to hot tier based on brick size during promotion and new file creation  

Version-Release number of selected component (if applicable):
=============
glusterfs-server-3.7.5-5


How reproducible:


Steps to Reproduce:
=============
1. Create a distributed-replicate (2x2) volume and attach 4 bricks as a hot tier
2. Mount the volume on a client, create a directory, and create around 6k files in it
3. Observe that although the attached bricks are 500 GB and 600 GB, files are distributed evenly across all the hot tier bricks; during promotion too, files are not distributed based on brick size

Actual results:


Expected results:
============
File distribution should happen based on brick size.


Additional info:
============
[root@rhs-client18 data]# gluster vol info disrep_tier 
 
Volume Name: disrep_tier
Type: Tier
Volume ID: ea4bd2c2-efd3-4d25-bbc1-8f6d9c75dafc
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick5/tier
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick5/tier
Brick3: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier
Brick4: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/disrep_teri
Brick6: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/disrep_teri
Brick7: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/disrep_teri
Brick8: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/disrep_teri
Options Reconfigured:
features.ctr-enabled: on
performance.readdir-ahead: on


DHT range (Hot tier)

[root@rhs-client18 data]# getfattr -d -m . -e hex /rhs/brick6/tier/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick6/tier/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007455dc34ffffffff
trusted.glusterfs.volume-id=0xea4bd2c2efd34d25bbc18f6d9c75dafc
trusted.tier.tier-dht=0x00000001000000008aa0e9b8ffffffff
trusted.tier.tier-dht.commithash=0x3239383732333738373600

[root@rhs-client18 data]# getfattr -d -m . -e hex /rhs/brick5/tier/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick5/tier/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.disrep_tier-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007455dc33
trusted.glusterfs.volume-id=0xea4bd2c2efd34d25bbc18f6d9c75dafc
trusted.tier.tier-dht=0x00000001000000008aa0e9b8ffffffff
trusted.tier.tier-dht.commithash=0x3239383732333738373600


DHT range (cold)

[root@rhs-client18 data]# getfattr -d -m . -e hex /rhs/brick7/disrep_teri
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick7/disrep_teri
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000000000000089ce1415
trusted.glusterfs.volume-id=0xea4bd2c2efd34d25bbc18f6d9c75dafc
trusted.tier.tier-dht=0x0000000100000000000000008aa0e9b7
trusted.tier.tier-dht.commithash=0x3239383732333738373600

[root@rhs-client18 data]# getfattr -d -m . -e hex /rhs/brick6/disrep_teri
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick6/disrep_teri
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.disrep_tier-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000089ce1416ffffffff
trusted.glusterfs.volume-id=0xea4bd2c2efd34d25bbc18f6d9c75dafc
trusted.tier.tier-dht=0x0000000100000000000000008aa0e9b7
trusted.tier.tier-dht.commithash=0x3239383732333738373600
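The hot-tier hash ranges above can be decoded from the `trusted.glusterfs.dht` xattr values. This is a small sketch, not GlusterFS source: the assumed 16-byte layout (a count word, a type word, then big-endian 32-bit range start and stop) is a best-effort reading of the on-disk DHT layout format that matches the hex dumps above.

```python
# Decode a trusted.glusterfs.dht xattr value (hex string) into its hash range.
# Assumed layout: 4-byte count, 4-byte type, then big-endian 32-bit start/stop.
import struct

def dht_range(xattr_hex):
    raw = bytes.fromhex(xattr_hex.removeprefix("0x"))
    _cnt, _type, start, stop = struct.unpack(">IIII", raw)
    return start, stop

# Values taken from the hot-tier bricks above
b5_start, b5_end = dht_range("0x0000000100000000000000007455dc33")  # brick5
b6_start, b6_end = dht_range("0x00000001000000007455dc34ffffffff")  # brick6

span = 2**32
print(f"brick5 covers {(b5_end - b5_start + 1) / span:.1%} of the hash space")
print(f"brick6 covers {(b6_end - b6_start + 1) / span:.1%} of the hash space")
# → brick5 covers 45.4% of the hash space
# → brick6 covers 54.6% of the hash space
```

Under this reading, the two hot-tier replica pairs split the 32-bit hash space roughly 45.4% / 54.6%, which is close to the 500:600 capacity ratio of the bricks.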
Comment 3 RajeshReddy 2015-11-12 00:45:33 EST
I created 2400 files (file1 to file2400):

[root@rhs-client18 ~]# ls -lrth /rhs/brick5/tier/bug | wc -l
1109
[root@rhs-client18 ~]# ls -lrth /rhs/brick6/tier/bug | wc -l
1294
[root@rhs-client18 ~]#
Comment 8 Joseph Elwin Fernandes 2016-02-09 10:58:53 EST
DHT does not distribute files based on brick size (or, to be precise, sub-volume size). A sub-volume is filled until it is full (its min-free-disk threshold is hit); only then are files distributed to the other sub-volumes with free space. The same behavior applies to both promotion and new file creation in tiering.
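The behavior described in this comment can be sketched as follows. This is a toy model, not GlusterFS code: `zlib.crc32` stands in for DHT's real hash function, each file is counted as one unit of space, and the capacities and min-free-disk check are simplified; the point is only that placement depends on the hash slice, not on capacity, until a sub-volume fills up.

```python
# Toy sketch of DHT-style placement: each sub-volume owns an equal slice of a
# 32-bit hash ring; capacity is ignored until a sub-volume hits its
# min-free-disk threshold, after which files spill to the freest sub-volume.
import zlib

SPAN = 2**32

def place(name, subvols, used, capacity, min_free=0.10):
    """Pick a sub-volume for `name`; spill over only if the hashed one is full."""
    h = zlib.crc32(name.encode())          # stand-in for DHT's hash function
    target = subvols[h * len(subvols) // SPAN]  # equal hash slices per sub-volume
    if capacity[target] - used[target] < min_free * capacity[target]:
        target = max(subvols, key=lambda s: capacity[s] - used[s])
    used[target] += 1
    return target

subvols = ["hot-0", "hot-1"]
used = {"hot-0": 0, "hot-1": 0}
capacity = {"hot-0": 5000, "hot-1": 6000}  # 5:6 ratio, as in the report

for i in range(1, 2401):
    place(f"file{i}", subvols, used, capacity)

print(used)  # counts land near 50/50, tracking the hash split, not the 5:6 ratio
```

With neither sub-volume near its threshold, the 2400 files split roughly evenly between the two sub-volumes, matching the even distribution the reporter observed rather than the capacity ratio.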
Comment 13 RajeshReddy 2016-02-15 06:46:58 EST
Tested with the glusterfs-rdma-3.7.5-19 build; distribution is happening based on brick size, so marking this bug as closed.
