Bug 1033813

Summary: Quota: Performance degrades when writing to many subdirectories.

Product: Red Hat Gluster Storage
Component: quota
Version: 2.1
Status: CLOSED DEFERRED
Severity: high
Priority: high
Reporter: Ben Turner <bturner>
Assignee: Vijaikumar Mallikarjuna <vmallika>
QA Contact: Ben Turner <bturner>
CC: rhs-bugs, smohan, storage-qa-internal, vbellur, vmallika
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Clones: 1282720
Bug Blocks: 1282720
Last Closed: 2015-11-17 09:00:43 UTC
Type: Bug

Description Ben Turner 2013-11-23 03:12:17 UTC
Description of problem:

I created a really deep directory structure for some tests:

DIR="/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9/0"

mkdir -p /mnt${DIR}
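For reference, the 100-component path above (the block /1/2/3/4/5/6/7/8/9/0 repeated ten times) can be generated instead of hard-coded. This loop is an illustrative sketch, not part of the original report:

```shell
# Build the same 100-component path programmatically:
# the ten-level block /1/2/3/4/5/6/7/8/9/0 repeated ten times.
DIR=""
for block in $(seq 1 10); do
    DIR="${DIR}/1/2/3/4/5/6/7/8/9/0"
done

# Sanity check: count the path components (one '/' per component).
printf '%s' "$DIR" | tr -cd '/' | wc -c   # 100 components deep
```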

Then I write/read a file:

[root@gqas015 images]# dd if=/dev/zero of=.$DIR/test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.77399 s, 155 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=.$DIR/test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.04755 s, 512 MB/s

Single-threaded sequential write speed is much lower when writing to deep directories than when writing to shallow ones:

[root@gqas015 images]# dd if=/dev/zero of=./test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.29749 s, 456 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=./test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.09654 s, 500 MB/s

[root@gqas011 brick1]# gluster volume info
 
Volume Name: vmstore
Type: Distributed-Replicate
Volume ID: 9e0ce050-a590-4c6a-8986-8655d91fd0a6
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 192.168.199.1:/bricks/brick1
Brick2: 192.168.199.2:/bricks/brick1
Brick3: 192.168.199.1:/bricks/brick2
Brick4: 192.168.199.2:/bricks/brick2
Brick5: 192.168.199.1:/bricks/brick3
Brick6: 192.168.199.2:/bricks/brick3
Brick7: 192.168.199.1:/bricks/brick4
Brick8: 192.168.199.2:/bricks/brick4
Options Reconfigured:
features.quota: on
[root@gqas011 brick1]# gluster volume quota vmstore list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0TB       80%      50.8GB 973.2GB

Version-Release number of selected component (if applicable):

[root@gqas011 brick1]# rpm -q glusterfs
glusterfs-3.4.0.44rhs-1.el6rhs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1. Set a quota
2. Create a deep directory structure
3. Write a file in the new dir
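The three steps above can be sketched as one script. MOUNT, the temp-dir fallback, and the smaller file size (100 MB vs. the report's 1000 MB) are assumptions for illustration; step 1 (setting the quota) must already have been done on the Gluster volume, e.g. with `gluster volume quota <vol> limit-usage`:

```shell
#!/bin/sh
# Repro sketch for the steps above. MOUNT should point at a quota-enabled
# Gluster mount (the report used /mnt); it falls back to a temp dir here
# so the sketch runs anywhere, though only a quota-enabled mount shows the bug.
MOUNT=${MOUNT:-$(mktemp -d)}

# Step 2: create the 100-level directory structure (/1/2/.../9/0 repeated).
DEEP="$MOUNT"
for i in $(seq 1 100); do DEEP="$DEEP/$((i % 10))"; done
mkdir -p "$DEEP"

# Step 3: write a file at the mount root and in the deep directory,
# keeping only the throughput summary line from each dd run.
dd if=/dev/zero of="$MOUNT/test" bs=1024k count=100 conv=sync 2>&1 | tail -n1
dd if=/dev/zero of="$DEEP/test"  bs=1024k count=100 conv=sync 2>&1 | tail -n1
```

Comparing the two summary lines reproduces the root-vs-deep throughput gap described below.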

Actual results:

Write performance in deeply nested subdirectories is about 33% of write performance at the volume root (155 MB/s vs. 456 MB/s above).

Expected results:

Write performance should be similar regardless of directory depth.

Additional info:

Comment 1 Ben Turner 2013-11-23 03:15:22 UTC
Here are some perf numbers for comparison:

Without quota, writing at the root:

[root@gqas015 images]# dd if=/dev/zero of=./test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.15569 s, 486 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=./test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.14813 s, 488 MB/s

Without quota, writing in the deep subdirectory:

[root@gqas015 images]# dd if=/dev/zero of=.$DIR/test bs=1024k count=1000 conv=sync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.31553 s, 453 MB/s
[root@gqas015 images]# echo 3 > /proc/sys/vm/drop_caches
[root@gqas015 images]# dd if=./test of=/dev/null bs=1024k
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.09389 s, 501 MB/s

Comment 3 Vijaikumar Mallikarjuna 2015-11-17 09:00:43 UTC
As 2.1 has reached EOL, closing this bug; filed 3.1 bug# 1282720 to track it.