Bug 648632 - ext4: writeback performance fixes
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.1
Priority: low
Severity: medium
Assigned To: Eric Sandeen
QA Contact: Eryu Guan
Reported: 2010-11-01 15:36 EDT by Eric Sandeen
Modified: 2011-05-23 16:27 EDT
Fixed In Version: kernel-2.6.32-92.el6
Doc Type: Bug Fix
Last Closed: 2011-05-23 16:27:54 EDT

Description Eric Sandeen 2010-11-01 15:36:36 EDT
Description of problem:

2 bugs in current ext4 writeback are solved with upstream patches:

From: Eric Sandeen <sandeen@redhat.com>
Date: Thu, 28 Oct 2010 01:30:03 +0000 (-0400)
Subject: ext4: don't bump up LONG_MAX nr_to_write by a factor of 8
X-Git-Tag: v2.6.37-rc1~76^2^2~44
X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git;a=commitdiff_plain;h=b443e7339aa08574d30b0819b344618459c76214

ext4: don't bump up LONG_MAX nr_to_write by a factor of 8

I'm uneasy with lots of stuff going on in ext4_da_writepages(),
but bumping nr_to_write from LLONG_MAX to -8 clearly isn't
making anything better, so avoid the multiplier in that case.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---

From: Eric Sandeen <sandeen@redhat.com>
Date: Thu, 28 Oct 2010 01:30:03 +0000 (-0400)
Subject: ext4: stop looping in ext4_num_dirty_pages when max_pages reached
X-Git-Tag: v2.6.37-rc1~76^2^2~45
X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git;a=commitdiff_plain;h=659c6009ca2e3a01acc9881bafe5f55ef09c965b

ext4: stop looping in ext4_num_dirty_pages when max_pages reached

Today we simply break out of the inner loop when we have accumulated
max_pages; this keeps scanning forward and doing pagevec_lookup_tag()
in the while (!done) loop, this does potentially a lot of work
with no net effect.

When we have accumulated max_pages, just clean up and return.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---

The problem fixed by the 2nd commit was reported on the list as a "50% performance regression" by a user doing very large streaming buffered IOs on a machine with a lot of memory; the time spent scanning dirty pages was significant.
Comment 1 RHEL Product and Program Management 2010-11-01 15:39:53 EDT
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux maintenance release. Product Management has 
requested further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed 
products. This request is not yet committed for inclusion in an Update release.
Comment 3 Zhang Kexin 2010-11-22 21:24:28 EST
Hi Eric,
Any guidance on how to reproduce this bug?
Thanks!
Comment 4 Eric Sandeen 2010-12-03 10:30:30 EST
On a box with a lot of memory, try doing a very long buffered IO.

The original reporter did:

2.6.31 disk write performance (RAID5 with 8 disks):

i7test7% dd if=/dev/zero of=/i7raid/bill/testfile1 bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 49.7106 s, 691 MB/s

2.6.32 disk write performance (RAID5 with 8 disks):

i7test7% dd if=/dev/zero of=/i7raid/bill/testfile1 bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 100.395 s, 342 MB/s

A lot of memory helps because more dirty data accumulates in the page cache before flushing, giving the function in question more dirty pages to scan.  Tuning the dirty ratio to hold more dirty pages in cache may make it more obvious, too.

-Eric
Comment 5 Aristeu Rozanski 2010-12-16 15:53:32 EST
Patch(es) available on kernel-2.6.32-92.el6
Comment 8 Eryu Guan 2011-03-25 06:06:37 EDT
Start a long buffered IO on a host with 64G memory
On -71 kernel I got 34359738368 bytes (34 GB) copied, 564.336 s, 60.9 MB/s

[root@gs-dl585g2-01 home]# uname -a
Linux gs-dl585g2-01.rhts.eng.bos.redhat.com 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@gs-dl585g2-01 home]# free -m
             total       used       free     shared    buffers     cached
Mem:         64429       1540      62888          0         21        157
-/+ buffers/cache:       1362      63067
Swap:        66607          0      66607
[root@gs-dl585g2-01 home]# pwd
/home
[root@gs-dl585g2-01 home]# mount | grep home
/dev/mapper/vg_gsdl585g201-lv_home on /home type ext4 (rw)
[root@gs-dl585g2-01 home]# time dd if=/dev/zero of=./testfile bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 564.336 s, 60.9 MB/s

real    9m24.358s
user    0m0.094s
sys     2m19.108s



On -122 kernel the performance is much better. I got 34359738368 bytes (34 GB) copied, 164.287 s, 209 MB/s

[root@gs-dl585g2-01 home]# time dd if=/dev/zero of=./testfile bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 164.287 s, 209 MB/s

real    2m44.317s
user    0m0.074s
sys     2m19.533s

I tried on another host with 16G memory and got similar results
On -71 kernel 34359738368 bytes (34 GB) copied, 329.049 s, 104 MB/s
On -122 kernel 34359738368 bytes (34 GB) copied, 273.56 s, 126 MB/s

Set it to VERIFIED
Comment 9 errata-xmlrpc 2011-05-23 16:27:54 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0542.html
