Bug 173147 - Data pages are not being flushed properly when using -o sync mount option
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: gfs
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Wendy Cheng
QA Contact: GFS Bugs
Blocks: 164915
Reported: 2005-11-14 12:07 EST by Kiersten (Kerri) Anderson
Modified: 2010-01-11 22:08 EST
CC: 2 users

Fixed In Version: RHBA-2006-0169
Doc Type: Bug Fix
Last Closed: 2006-01-06 15:20:20 EST

Attachments: None
Description Kiersten (Kerri) Anderson 2005-11-14 12:07:24 EST
Description of problem:
There is a large performance hit when using the -o sync mount option on gfs
filesystems.  The problem is that pages are not flushed to disk when
gfs_writepage is invoked, because the transaction has not yet completed.  As a
result, pages currently only get flushed when pdflush runs.

Version-Release number of selected component (if applicable):

How reproducible:
Mount the gfs filesystem with the -o sync option.
Do a cp of a 1 GB file to the mounted filesystem.  This will take on the order
of hours to complete.

Steps to Reproduce:
1. mount -t gfs -o sync /dev/mapper/VolCluster0 /mnt/gfs0
2. dd if=/dev/zero of=/tmp/bigfile bs=1024k count=1000
3. time cp /tmp/bigfile /mnt/gfs0
Actual results:
Command takes 6+ hours to complete.

Expected results:
Command should take a few minutes, depending on the processor and I/O subsystem.

Additional info:  Proposed patch
RCS file: /cvs/cluster/cluster/gfs-kernel/src/gfs/ops_file.c,v
retrieving revision 1.18
diff -u -r1.18 ops_file.c
--- gfs-kernel/src/gfs/ops_file.c       4 Mar 2005 00:59:13 -0000       1.18
+++ gfs-kernel/src/gfs/ops_file.c       14 Nov 2005 17:07:45 -0000
@@ -815,8 +815,14 @@


-       if (file->f_flags & O_SYNC)
+       if (file->f_flags & O_SYNC || IS_SYNC(inode)) {
+               error = filemap_fdatawrite(ip->i_gl);
+               if (error == 0)
+                       error = filemap_fdatawait(file->f_mapping);
+               if (error)
+                       goto fail_ipres;
+       }

        if (alloc_required) {
                gfs_assert_warn(sdp, count != size ||
Comment 2 Wendy Cheng 2005-11-17 14:58:52 EST
Test result:

Without the patch:
[root@cluster1 gfs]# time dd if=/dev/zero of=/mnt/gfs1/bigfile bs=1024k count=1000
real    1m34.196s
user    0m0.010s
sys     1m29.660s

With the patch:
[root@cluster1 gfs]# time dd if=/dev/zero of=/mnt/gfs1/bigfile bs=1024k count=1000
real    0m20.875s
user    0m0.001s
sys     0m4.449s

Comment 3 Wendy Cheng 2005-11-17 15:01:21 EST
Just in case someone wants to build the GFS module without the official GFS
RPMs: the above patch has a typo:

error = filemap_fdatawrite(ip->i_gl); should have been
error = filemap_fdatawrite(file->f_mapping);
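With that correction folded into the hunk from the description, the sync path would read as follows (reconstructed from the patch and this comment; not a verbatim copy of the committed revision):

```c
	if (file->f_flags & O_SYNC || IS_SYNC(inode)) {
		error = filemap_fdatawrite(file->f_mapping);  /* start writeback of dirty pages */
		if (error == 0)
			error = filemap_fdatawait(file->f_mapping);  /* wait for writeback to finish */
		if (error)
			goto fail_ipres;
	}
```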
Comment 4 Wendy Cheng 2005-11-17 15:19:20 EST
Checked into CVS. 
Comment 6 Red Hat Bugzilla 2006-01-06 15:20:20 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

