Bug 191055 - Small write and gfs_fsync performance issue
Summary: Small write and gfs_fsync performance issue
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: gfs (Show other bugs)
Version: 3
Hardware: All Linux
Target Milestone: ---
Assignee: Kiersten (Kerri) Anderson
QA Contact: GFS Bugs
Depends On:
Reported: 2006-05-08 15:30 UTC by Kiersten (Kerri) Anderson
Modified: 2010-01-12 03:11 UTC (History)
1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2008-02-14 16:49:01 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Wendy Cheng 2006-05-08 15:30:14 UTC
+++ This bug was initially created as a clone of Bug #190950 +++

Description of problem:

Customer supplies a test program that is said to represent their 
running environment well. Compared with EXT3, GFS delivers roughly 
10x less bandwidth (from the application's point of view), and it 
also generates 200 MB of disk I/O for 8 MB of application data, 
versus EXT3's 38 MB of disk I/O.

The test program does the following:

 1. Create an 8 MB temp file and write to it sequentially.
 2. Enable its own pthread mutex locks
 3. Start the timer
 4. Loop 8192 times:
        1. "write" 1024 bytes of data at a random offset
        2. "fdatasync", followed by "fsync", after every write
 5. Close the file
 6. Stop the timer
 7. Calculate bandwidth and latency from the time statistics
    collected between steps 3 and 6.

The program supports multi-threaded runs (and many other features), 
but we're focusing on the single-thread scenario covered by 
steps 1 through 7.

Version-Release number of selected component (if applicable):
GFS 6.1

How reproducible:
Each and every time

Steps to Reproduce:
Actual results:

Expected results:

Additional info:
