Bug 648632

Summary: ext4: writeback performance fixes
Product: Red Hat Enterprise Linux 6
Component: kernel
Version: 6.1
Reporter: Eric Sandeen <esandeen>
Assignee: Eric Sandeen <esandeen>
QA Contact: Eryu Guan <eguan>
CC: eguan, kzhang
Status: CLOSED ERRATA
Severity: medium
Priority: low
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: kernel-2.6.32-92.el6
Doc Type: Bug Fix
Last Closed: 2011-05-23 20:27:54 UTC

Description Eric Sandeen 2010-11-01 19:36:36 UTC
Description of problem:

Two bugs in the current ext4 writeback path are fixed by the following upstream patches:

From: Eric Sandeen <sandeen>
Date: Thu, 28 Oct 2010 01:30:03 +0000 (-0400)
Subject: ext4: don't bump up LONG_MAX nr_to_write by a factor of 8
X-Git-Tag: v2.6.37-rc1~76^2^2~44
X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git;a=commitdiff_plain;h=b443e7339aa08574d30b0819b344618459c76214

ext4: don't bump up LONG_MAX nr_to_write by a factor of 8

I'm uneasy with lots of stuff going on in ext4_da_writepages(),
but bumping nr_to_write from LLONG_MAX to -8 clearly isn't
making anything better, so avoid the multiplier in that case.

Signed-off-by: Eric Sandeen <sandeen>
Signed-off-by: "Theodore Ts'o" <tytso>
---
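To make the arithmetic behind the first fix concrete, here is a small userspace sketch (my illustration, not the patch itself): writeback passes nr_to_write == LONG_MAX to mean "write everything", and multiplying that by 8 wraps around to -8, so the fix skips the multiplier for that case.

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* Writeback passes nr_to_write == LONG_MAX to mean "write everything". */
	long nr_to_write = LONG_MAX;

	/* Pre-fix: unconditional bump by a factor of 8.  Done with unsigned
	 * math here to keep the demo well-defined; on two's-complement
	 * machines the result wraps to -8, a nonsense page count. */
	long bumped = (long)((unsigned long)nr_to_write * 8);

	/* Post-fix: leave LONG_MAX alone, only bump finite requests. */
	long fixed = (nr_to_write == LONG_MAX) ? nr_to_write : nr_to_write * 8;

	printf("LONG_MAX * 8 wraps to:     %ld\n", bumped);
	printf("fixed desired nr_to_write: %ld\n", fixed);
	return 0;
}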

From: Eric Sandeen <sandeen>
Date: Thu, 28 Oct 2010 01:30:03 +0000 (-0400)
Subject: ext4: stop looping in ext4_num_dirty_pages when max_pages reached
X-Git-Tag: v2.6.37-rc1~76^2^2~45
X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git;a=commitdiff_plain;h=659c6009ca2e3a01acc9881bafe5f55ef09c965b

ext4: stop looping in ext4_num_dirty_pages when max_pages reached

Today we simply break out of the inner loop when we have accumulated
max_pages; this keeps scanning forward and doing pagevec_lookup_tag()
in the while (!done) loop, this does potentially a lot of work
with no net effect.

When we have accumulated max_pages, just clean up and return.

Signed-off-by: Eric Sandeen <sandeen>
Signed-off-by: "Theodore Ts'o" <tytso>
---

The issue fixed by the second commit was reported on the list as a "50% performance regression" by a user doing very large streaming buffered IOs on a machine with a lot of memory; the time spent scanning dirty pages was significant.
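To make the control-flow point concrete, here is a simplified userspace analogue (illustrative only: lookup_batch() is a made-up stand-in for pagevec_lookup_tag(), and the structure is paraphrased from the commit description rather than copied from the kernel). Pre-fix, reaching max_pages only breaks the inner loop, so the outer while (!done) loop keeps issuing lookups that change nothing; the fix ends the whole scan instead.

#include <stdbool.h>
#include <stdio.h>

#define BATCH        16        /* pages returned per lookup, like a pagevec */
#define TOTAL_DIRTY  1000000L  /* pretend this many dirty pages exist */

/* Made-up stand-in for pagevec_lookup_tag(): returns the number of pages
 * found in this batch and advances the scan index. */
static int lookup_batch(long *next_index)
{
	int found = 0;
	while (found < BATCH && *next_index < TOTAL_DIRTY) {
		(*next_index)++;
		found++;
	}
	return found;
}

static void count_dirty_pages(long max_pages, bool stop_early)
{
	long index = 0, num = 0, lookups = 0;
	bool done = false;

	while (!done) {
		int nr = lookup_batch(&index);
		if (nr == 0)
			break;			/* ran out of dirty pages */
		lookups++;
		for (int i = 0; i < nr; i++) {
			if (num >= max_pages) {
				if (stop_early)	/* post-fix: stop the whole scan */
					done = true;
				break;		/* pre-fix: only the inner loop stops */
			}
			num++;
		}
	}
	printf("max_pages=%ld stop_early=%d: counted %ld pages in %ld lookups\n",
	       max_pages, (int)stop_early, num, lookups);
}

int main(void)
{
	count_dirty_pages(256, false);	/* pre-fix: scans every batch anyway */
	count_dirty_pages(256, true);	/* post-fix: stops once max_pages is hit */
	return 0;
}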

Comment 1 RHEL Program Management 2010-11-01 19:39:53 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux maintenance release. Product Management has 
requested further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed 
products. This request is not yet committed for inclusion in an Update release.

Comment 3 Zhang Kexin 2010-11-23 02:24:28 UTC
Hi Eric,
Any guidance on how to reproduce this bug?
Thanks!

Comment 4 Eric Sandeen 2010-12-03 15:30:30 UTC
On a box with a lot of memory, try doing a very long buffered IO.

The original reporter did:

2.6.31 disk write performance (RAID5 with 8 disks):

i7test7% dd if=/dev/zero of=/i7raid/bill/testfile1 bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 49.7106 s, 691 MB/s

2.6.32 disk write performance (RAID5 with 8 disks):

i7test7% dd if=/dev/zero of=/i7raid/bill/testfile1 bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 100.395 s, 342 MB/s

A lot of memory helps because more dirty data is kept in memory before flushing, giving the function in question more dirty pages to scan.  Tuning the dirty ratio (vm.dirty_ratio) to hold more dirty pages in cache may make it more obvious, too.

-Eric

Comment 5 Aristeu Rozanski 2010-12-16 20:53:32 UTC
Patch(es) available on kernel-2.6.32-92.el6

Comment 8 Eryu Guan 2011-03-25 10:06:37 UTC
Started a long buffered IO on a host with 64G of memory.
On the -71 kernel I got: 34359738368 bytes (34 GB) copied, 564.336 s, 60.9 MB/s

[root@gs-dl585g2-01 home]# uname -a
Linux gs-dl585g2-01.rhts.eng.bos.redhat.com 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@gs-dl585g2-01 home]# free -m
             total       used       free     shared    buffers     cached
Mem:         64429       1540      62888          0         21        157
-/+ buffers/cache:       1362      63067
Swap:        66607          0      66607
[root@gs-dl585g2-01 home]# pwd
/home
[root@gs-dl585g2-01 home]# mount | grep home
/dev/mapper/vg_gsdl585g201-lv_home on /home type ext4 (rw)
[root@gs-dl585g2-01 home]# time dd if=/dev/zero of=./testfile bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 564.336 s, 60.9 MB/s

real    9m24.358s
user    0m0.094s
sys     2m19.108s



On the -122 kernel the performance is much better; I got: 34359738368 bytes (34 GB) copied, 164.287 s, 209 MB/s

[root@gs-dl585g2-01 home]# time dd if=/dev/zero of=./testfile bs=1M count=32768
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 164.287 s, 209 MB/s

real    2m44.317s
user    0m0.074s
sys     2m19.533s

I tried this on another host with 16G of memory and got similar results:
On the -71 kernel: 34359738368 bytes (34 GB) copied, 329.049 s, 104 MB/s
On the -122 kernel: 34359738368 bytes (34 GB) copied, 273.56 s, 126 MB/s

Setting the bug to VERIFIED.

Comment 9 errata-xmlrpc 2011-05-23 20:27:54 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0542.html