Bug 865305

Summary: fuse: backport scatter-gather direct IO [rhel-6.3.z]
Product: Red Hat Enterprise Linux 6
Reporter: RHEL Program Management <pm-rhel>
Component: kernel
Assignee: Frantisek Hrbata <fhrbata>
Status: CLOSED ERRATA
QA Contact: Red Hat Kernel QE team <kernel-qe>
Severity: high
Docs Contact:
Priority: urgent
Version: 6.4
CC: bdonahue, bengland, bfoster, dhoward, esandeen, jpallich, jshao, kzhang, msvoboda, perfbz, pm-eus, rwheeler, sforsber, shaines, vbellur
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: kernel-2.6.32-279.22.1.el6
Doc Type: Bug Fix
Doc Text:
Filesystem in Userspace (FUSE) did not implement scatter-gather direct I/O optimally. Consequently, the kernel had to process an extensive number of FUSE requests, which had a negative impact on system performance. This update applies a set of patches that improve internal request management, which also benefits other features such as readahead. FUSE direct I/O overhead has been significantly reduced, minimizing the negative effect on system performance.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-05 19:56:12 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 858850
Bug Blocks:
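
The Doc Text above refers to scatter-gather direct I/O: a single vectored request covering several separate user buffers, which the patched kernel can service with far fewer FUSE requests than one per buffer. As a rough illustration of what an application issues, here is a minimal sketch using Python's os.pwritev/os.preadv on a temporary file (the FUSE mount path and O_DIRECT alignment requirements are omitted; this is an illustrative assumption, not the patch itself):

```python
import os
import tempfile

# Scatter-gather write: one syscall submits several buffers at once.
fd, path = tempfile.mkstemp()
try:
    buffers = [b"aaaa", b"bbbb", b"cccc"]
    written = os.pwritev(fd, buffers, 0)     # vectored write at offset 0
    assert written == 12

    # Scatter-gather read back into three separate bytearrays.
    parts = [bytearray(4) for _ in range(3)]
    got = os.preadv(fd, parts, 0)            # vectored read at offset 0
    assert got == 12
    print(b"".join(parts))                   # b'aaaabbbbcccc'
finally:
    os.close(fd)
    os.unlink(path)
```

On an unpatched RHEL 6.3 kernel, direct I/O over a FUSE mount fragments such vectors into many small FUSE requests; the backported patches let them travel as larger requests, which is where the throughput gain reported below comes from.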

Description RHEL Program Management 2012-10-11 08:00:26 UTC
This bug has been copied from bug #858850 and has been proposed
to be backported to 6.3 z-stream (EUS).

Comment 8 Ben England 2013-01-21 15:06:21 UTC
Monson,

see the comment at https://bugzilla.redhat.com/show_bug.cgi?id=858850#c18, which describes how to get the test program and data.  I didn't realize you had a separate bz for this.  I think I can test the kernel for you now that I know where to find it.  You can too, if you have 10-GbE and some storage with throughput similar to 10-GbE.  We really want this patch because it makes a huge difference; see the article at

http://perf1.perf.lab.eng.bos.redhat.com/bengland/laptop/matte/virt/rhev-rhs-brief/rhev-rhs-single-host-perfbrief-v1.0.odt

Best Regards, -ben

Comment 10 Ben England 2013-01-23 03:51:27 UTC
So which RHEL 6.4 kernel should I test with?  We'll try the RHEL 6.3 kernel ASAP.  I pulled the RPMs in #c9 from Brew.

Comment 11 Monson Shao 2013-01-23 05:32:58 UTC
(In reply to comment #10)
> so which rhel6.4 kernel should I test with?  We'll try the RHEL6.3 kernel
> ASAP.  I pulled RPMs in #c9 from Brew.

The RHEL 6.4 kernel has already been tested in Bug 858850.
So all you need to do is paste the RHEL 6.3 kernel test result here, and I will do the rest. Thanks.

Comment 12 Ben England 2013-01-24 04:39:59 UTC
I retested with the above kernel, 2.6.32-279.22.1.el6.x86_64, and with 2.6.32-279.el6.x86_64 (RHEL 6.3), and again I see a huge increase in throughput, particularly with iozone reads from one or more guests inside the KVM host.  Data and graphs are at:

http://perf1.perf.lab.eng.bos.redhat.com/bengland/public/rhs/virt/sg-patch-retest-2013-01-21.ods

Let me know when this patch can be obtained via RHN updates to RHEL6.3.
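
For readers without iozone handy, the kind of sequential-read throughput measurement described above can be approximated with a small script. This is a crude stand-in under stated assumptions (arbitrary file size and block size, temporary file instead of a large file on the FUSE mount, no page-cache drop); the actual numbers in this bz came from iozone runs against gluster-backed KVM guests:

```python
import os
import tempfile
import time

def read_throughput(path, block_size=1024 * 1024):
    """Sequentially read a file with unbuffered I/O and return MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)

# Demo on a small temporary file; a real test would target a large
# file on the FUSE mount and drop caches between runs.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (8 * 1024 * 1024))
os.close(fd)
mbps = read_throughput(path)
os.unlink(path)
print(f"{mbps:.1f} MB/s")
```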

Comment 15 errata-xmlrpc 2013-02-05 19:56:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0223.html