Bug 865305 - fuse: backport scatter-gather direct IO [rhel-6.3.z]
Summary: fuse: backport scatter-gather direct IO [rhel-6.3.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Assignee: Frantisek Hrbata
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On: 858850
Blocks:
 
Reported: 2012-10-11 08:00 UTC by RHEL Program Management
Modified: 2013-02-07 13:20 UTC
15 users

Fixed In Version: kernel-2.6.32-279.22.1.el6
Doc Type: Bug Fix
Doc Text:
Filesystem in Userspace (FUSE) did not implement scatter-gather direct I/O optimally. Consequently, the kernel had to process an extensive number of FUSE requests, which had a negative impact on system performance. This update applies a set of patches that improve internal request management, also benefiting other features such as readahead. FUSE direct I/O overhead has been significantly reduced to minimize negative effects on system performance.
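For illustration only (not part of the original report): the pattern being optimized is vectored, or scatter-gather, I/O, where one system call moves data into or out of several buffers at once. Before the patch set, a FUSE direct I/O transfer like this could be split into many small FUSE protocol requests; afterwards, the scattered buffers travel in far fewer requests. A minimal sketch of the user-visible pattern, using Python's os.preadv on a scratch file:

```python
import os
import tempfile

# Create a scratch file with known contents.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"scatter-gather direct IO")

    # Two destination buffers: a single preadv() call fills both in
    # order, instead of issuing one read() per buffer.
    buf1 = bytearray(8)   # receives the first 8 bytes
    buf2 = bytearray(16)  # receives the next 16 bytes
    nread = os.preadv(fd, [buf1, buf2], 0)

    print(nread, bytes(buf1), bytes(buf2))
    # → 24 b'scatter-' b'gather direct IO'
finally:
    os.close(fd)
    os.unlink(path)
```

On a FUSE mount with direct I/O, the kernel must forward such a request to the userspace filesystem daemon; batching the scattered segments into fewer FUSE requests is what reduces the overhead described above.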
Clone Of:
Environment:
Last Closed: 2013-02-05 19:56:12 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHSA-2013:0223 (SHIPPED_LIVE): Moderate: kernel security and bug fix update. Last updated 2013-02-06 00:52:09 UTC.

Description RHEL Program Management 2012-10-11 08:00:26 UTC
This bug has been copied from bug #858850 and has been proposed
to be backported to 6.3 z-stream (EUS).

Comment 8 Ben England 2013-01-21 15:06:21 UTC
Monson,

see the comment at https://bugzilla.redhat.com/show_bug.cgi?id=858850#c18, which describes how to get the test program and data.  I didn't realize you had a separate bz for this.  I think I can test the kernel for you now that I know where to find it.  You can too if you have 10-GbE and some storage with throughput similar to 10-GbE.  We really want this patch because it makes a huge difference; see the article at

http://perf1.perf.lab.eng.bos.redhat.com/bengland/laptop/matte/virt/rhev-rhs-brief/rhev-rhs-single-host-perfbrief-v1.0.odt

Best Regards, -ben

Comment 10 Ben England 2013-01-23 03:51:27 UTC
So which rhel6.4 kernel should I test with?  We'll try the RHEL6.3 kernel ASAP.  I pulled the RPMs in #c9 from Brew.

Comment 11 Monson Shao 2013-01-23 05:32:58 UTC
(In reply to comment #10)
> so which rhel6.4 kernel should I test with?  We'll try the RHEL6.3 kernel
> ASAP.  I pulled RPMs in #c9 from Brew.

The rhel6.4 kernel has already been tested in Bug 858850, so all you need to do is paste the rhel6.3 kernel test results here, and I will do the rest. Thanks.

Comment 12 Ben England 2013-01-24 04:39:59 UTC
I retested with the above kernels, 2.6.32-279.22.1.el6.x86_64 and 2.6.32-279.el6.x86_64 (RHEL6.3), and again I see a huge increase in throughput, particularly with iozone reads from one or more guests inside the KVM host.  Data and graphs are at:

http://perf1.perf.lab.eng.bos.redhat.com/bengland/public/rhs/virt/sg-patch-retest-2013-01-21.ods

Let me know when this patch can be obtained via RHN updates to RHEL6.3.

Comment 15 errata-xmlrpc 2013-02-05 19:56:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0223.html

