Bug 1288511 - Taking long time to promote/demote large number of files
Summary: Taking long time to promote/demote large number of files
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: hari gowtham
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard: tier-performance
Depends On:
Blocks: 1358583
 
Reported: 2015-12-04 13:11 UTC by RajeshReddy
Modified: 2018-11-08 18:27 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1358583 (view as bug list)
Environment:
Last Closed: 2018-11-08 18:27:00 UTC
Embargoed:



Description RajeshReddy 2015-12-04 13:11:25 UTC
Description of problem:
============
Promoting/demoting a large number of files takes a long time.

Version-Release number of selected component (if applicable):
==============
glusterfs-server-3.7.5-9

How reproducible:


Steps to Reproduce:
===========
1. Create a 2x2 volume, mount it on a client using FUSE, create a directory, and create 50k files in it using: for i in {1..50000}; do echo $i >> ./file$i; done
2. Attach 2x2 hot bricks to the volume, create a new directory, and create around 20k files in it using: for i in {1..20000}; do echo $i >> ./file$i; done
3. Kill all the brick processes and restart the volume using the force option.
4. After 120s, start counting the number of files migrated from hot to cold; it took around 30 minutes to migrate about 5k files from hot to cold.
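The file-creation loops in the steps above are missing their `do`/`done` delimiters as written; a minimal standalone sketch of the corrected loop, with a smaller illustrative count (the directory name `tier_repro` is hypothetical — in a real reproduction this would run inside the FUSE mount, with COUNT set to 50000 or 20000):

```shell
#!/bin/sh
# Create COUNT small files named file1..fileCOUNT, each containing its index.
# COUNT=100 keeps this sketch fast; the bug report uses 50k/20k files.
COUNT=100
mkdir -p tier_repro && cd tier_repro || exit 1
i=1
while [ "$i" -le "$COUNT" ]; do
    echo "$i" >> "./file$i"
    i=$((i + 1))
done
# One line of output per created file; should print 100 here.
ls | wc -l
```

A `while` loop is used instead of `for i in {1..50000}` so the sketch also works under plain POSIX `sh`, where brace expansion is not available.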

Actual results:


Expected results:


Additional info:
===========
[root@rhs-client19 test_tier-tier-dht]# gluster vol info test_tier 
 
Volume Name: test_tier
Type: Tier
Volume ID: 9bca8ffb-d47c-4636-95ab-2cfc58da422e
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick5/test_tier_hot4
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick5/test_tier_hot4
Brick3: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick4/test_tier_hot3
Brick4: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick4/test_tier_hot3
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/test_tier_hot1
Brick6: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/test_tier_hot1
Brick7: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/test_tier_hot2
Brick8: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/test_tier_hot2
Options Reconfigured:
cluster.tier-mode: test
features.ctr-enabled: on
performance.readdir-ahead: on

Client name:vertigo.lab.eng.blr.redhat.com
Mount:/mnt/test_tier

sosreport available at /home/repo/sosreports/bug.1288509 on rhsqe-repo.lab.eng.blr.redhat.com

Comment 3 RajeshReddy 2015-12-07 10:02:40 UTC
To simulate a node-down scenario, I killed all the brick processes and then started the volume with the force option to bring the downed brick processes back up.

Comment 4 RajeshReddy 2015-12-08 10:53:48 UTC
Able to reproduce the issue without step 3 (kill all the brick processes and restart the volume using the force option).

Comment 5 Nag Pavan Chilakam 2015-12-09 09:41:11 UTC
I have a setup with 8 nodes. On 3.7.5-9, even though there are sufficient files to be demoted, only about 400 files per node on average are getting demoted in a span of one hour (I changed the demote frequency to 1 hour).
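For reference, the demote-frequency change mentioned above is made with `gluster volume set`; a sketch of the tiering tunables involved, assuming the option names as documented for GlusterFS 3.7 tiering (volume name taken from this report, values illustrative):

```shell
# Demote scan interval: the comment above set this to one hour (3600 s).
gluster volume set test_tier cluster.tier-demote-frequency 3600
# Promote scan interval (default 120 s, matching the 120 s wait in step 4).
gluster volume set test_tier cluster.tier-promote-frequency 120
# Per-cycle migration caps that bound how many files move in each scan;
# these caps are one reason large file counts can migrate slowly.
gluster volume set test_tier cluster.tier-max-files 10000
gluster volume set test_tier cluster.tier-max-mb 4000
```

These commands reconfigure a live volume, so this fragment is shown for context rather than as a runnable script.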

Comment 11 hari gowtham 2018-11-08 18:27:00 UTC
As tier is not being actively developed, I'm closing this bug. Feel free to reopen it if necessary.

