Bug 1033813 - Quota: Performance degrades when writing to many subdirectories.
Summary: Quota: Performance degrades when writing to many subdirectories.
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact: Ben Turner
URL:
Whiteboard:
Depends On:
Blocks: 1282720
 
Reported: 2013-11-23 03:12 UTC by Ben Turner
Modified: 2016-09-17 12:37 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1282720 (view as bug list)
Environment:
Last Closed: 2015-11-17 09:00:43 UTC
Target Upstream Version:


Attachments

Description Ben Turner 2013-11-23 03:12:17 UTC
Description of problem:

I created a really deep directory structure for some tests:

DIR="/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0"

mkdir -p /mnt${DIR}

Then I write/read a file:

[root@gqas015 images]# dd if=/dev/zero of=.$DIR/test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.77399 s, 155 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=.$DIR/test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.04755 s, 512 MB/s

Single-threaded sequential write speed is much lower when writing to deep directories than when writing to shallow directories:

[root@gqas015 images]# dd if=/dev/zero of=./test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.29749 s, 456 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=./test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.09654 s, 500 MB/s

[root@gqas011 brick1]# gluster volume info
 
Volume Name: vmstore
Type: Distributed-Replicate
Volume ID: 9e0ce050-a590-4c6a-8986-8655d91fd0a6
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 192.168.199.1:/bricks/brick1
Brick2: 192.168.199.2:/bricks/brick1
Brick3: 192.168.199.1:/bricks/brick2
Brick4: 192.168.199.2:/bricks/brick2
Brick5: 192.168.199.1:/bricks/brick3
Brick6: 192.168.199.2:/bricks/brick3
Brick7: 192.168.199.1:/bricks/brick4
Brick8: 192.168.199.2:/bricks/brick4
Options Reconfigured:
features.quota: on
[root@gqas011 brick1]# gluster volume quota vmstore list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0TB       80%      50.8GB 973.2GB

Version-Release number of selected component (if applicable):

[root@gqas011 brick1]# rpm -q glusterfs
glusterfs-3.4.0.44rhs-1.el6rhs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1. Set a quota
2. Create a deep directory structure
3. Write a file in the new dir
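The steps above can be sketched as a script. This is a minimal local sketch that builds the 100-level path programmatically and uses a temporary directory in place of the Gluster mount; the quota step needs a live volume, so it is shown only as comments (volume name "vmstore" taken from the report):

```shell
#!/bin/sh
# Step 1 (on a live volume only; shown for reference):
#   gluster volume quota vmstore enable
#   gluster volume quota vmstore limit-usage / 1TB

MNT=$(mktemp -d)    # stand-in for the FUSE mount point, e.g. /mnt

# Build the 100-level-deep /1/2/3/.../0 path programmatically
# instead of hard-coding the string from the report.
DIR=""
for i in $(seq 1 100); do
    DIR="$DIR/$((i % 10))"
done

# Step 2: create the deep directory structure.
mkdir -p "$MNT$DIR"

# Step 3: write a file in the new directory (small count here;
# the report used bs=1024k count=1000).
dd if=/dev/zero of="$MNT$DIR/test" bs=1024k count=10 conv=sync 2>/dev/null
```

To reproduce the comparison, run the same dd against "$MNT"/test at the top level and compare throughput; remove the scratch tree with rm -rf "$MNT" afterwards.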

Actual results:

Single-threaded sequential write performance in a deeply nested subdirectory is about a third of what it is at the volume root (155 MB/s vs. 456 MB/s).
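The percentage follows directly from the dd numbers above (155 MB/s into the deep directory vs. 456 MB/s at the root, both with quota enabled); a quick check:

```shell
# Ratio of deep-directory write speed to root write speed,
# using the throughput figures from the dd output above.
awk 'BEGIN { printf "%.0f%%\n", 155 / 456 * 100 }'
```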

Expected results:

Similar performance numbers.

Additional info:

Comment 1 Ben Turner 2013-11-23 03:15:22 UTC
Here are some perf numbers for comparison:

No quota root:

[root@gqas015 images]# dd if=/dev/zero of=./test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.15569 s, 486 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=./test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.14813 s, 488 MB/s

No quota sub:

[root@gqas015 images]# dd if=/dev/zero of=.$DIR/test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.31553 s, 453 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=./test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.09389 s, 501 MB/s

Comment 3 Vijaikumar Mallikarjuna 2015-11-17 09:00:43 UTC
As 2.1 is EOL'ed, closing this bug and filing 3.1 bug #1282720.

