Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
For bugs related to the Red Hat Enterprise Linux 3 product line. The current stable release is 3.9. For Red Hat Enterprise Linux 6 and above, please visit Red Hat JIRA at https://issues.redhat.com/secure/CreateIssue!default.jspa?pid=12332745 to report new issues.

Bug 136398

Summary: NFS direct reads don't flush dirty cached pages
Product: Red Hat Enterprise Linux 3
Component: kernel
Version: 3.0
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Chuck Lever <cel>
Assignee: Steve Dickson <steved>
QA Contact: Brian Brock <bbrock>
CC: davej, k.georgiou, petrides, riel, steved
Doc Type: Bug Fix
Last Closed: 2005-05-18 13:28:18 UTC
Attachments: proposed patch

Description Chuck Lever 2004-10-19 19:03:35 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.2)
Gecko/20040803

Description of problem:
The NFS direct I/O patch applied to RHEL 3.0 update 3 has a minor bug
that was found during testing with fsx-odirect.  The fix is in
function nfs_file_direct_IO in fs/nfs/direct.c:

        retval = filemap_fdatasync(mapping);
        if (retval == 0)
-               retval = fsync_inode_data_buffers(inode);
+               retval = nfs_wb_all(inode);
        if (retval == 0)
                retval = filemap_fdatawait(mapping); 

fsync_inode_data_buffers() comes from the normal direct I/O path, but
since NFS doesn't use inode data buffers, it is a no-op in the NFS
direct I/O path.  nfs_wb_all is the correct function to call here.

Note that I'm not aware of any real applications that access a
file via direct I/O and a writable mapping simultaneously.

Version-Release number of selected component (if applicable):
kernel-2.4.21-20.EL

How reproducible:
Always

Steps to Reproduce:
1. mmap a file on NFS, and open the same file with O_DIRECT
2. dirty the mapping with just a few bytes, then read from the O_DIRECT
file descriptor in quick succession

Actual Results:  I did this with fsx-odirect.  It dirtied 92KB of the
map, and the NFS client flushed only 64KB of the dirty pages before
the direct read operation. 

Expected Results:  All 92KB should have been written to the server
before the direct read operation.

Additional info:

The problem occurs because nfs_strategy only pushes writes out in
"wsize" chunks.  nfs_wb_all pushes everything no matter how many NFS
write operations it takes.

Comment 1 Steve Dickson 2004-10-19 19:46:38 UTC
Created attachment 105463 [details]
proposed patch

I'm hopeful that this will make it into RHEL3-U5

Comment 2 Ernie Petrides 2004-11-16 02:25:36 UTC
A fix for this problem has just been committed to the RHEL3 U5
patch pool this evening (in kernel version 2.4.21-25.1.EL).


Comment 3 Tim Powers 2005-05-18 13:28:18 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2005-294.html