Bug 508338 - Improving performance relating to random i/o on ext3 filesystems
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assigned To: Eric Sandeen
Red Hat Kernel QE team
Reported: 2009-06-26 12:27 EDT by don frederick
Modified: 2013-02-25 14:05 EST
2 users

Doc Type: Bug Fix
Last Closed: 2013-02-25 14:05:56 EST

Attachments: None
Description don frederick 2009-06-26 12:27:18 EDT
User-Agent:       Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/2009042320 Red Hat/3.0.10-1.el5 Firefox/3.0.10

The issue is slow access times for our 99% random block-level I/O.  Each read requires reading in multiple inodes before the data block when the inodes are not cached.  We can't seem to keep inodes in cache even with the vfs_cache_pressure=0 setting.
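For context, the tunable the reporter mentions lives under /proc/sys/vm. A minimal sketch of inspecting and (as root) setting it; the sysctl.conf stanza is the standard persistence form, not something taken from this report:

```shell
# Print the current value; the default is 100, and 0 asks the kernel
# never to reclaim the dentry/inode caches (the setting tried here).
cat /proc/sys/vm/vfs_cache_pressure

# Runtime change and persistence both require root, so they are
# shown commented out:
#   sysctl -w vm.vfs_cache_pressure=0
#   echo 'vm.vfs_cache_pressure = 0' >> /etc/sysctl.conf
```

Note that vfs_cache_pressure=0 only tells the reclaim code not to shrink the inode/dentry caches; under memory pressure the caches can still be limited by how much was ever read in.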

These poor IO response times are noticeable in our application.  We ran the exact same test with the same SAN and a server with a different OS and filesystem and saw good IO response times. 

Reproducible: Always
Comment 1 Ondrej Vasik 2009-07-07 08:41:46 EDT
I guess that's something for kernel guys, not for basic system directory layout package. Reassigning.
Comment 2 Eric Sandeen 2009-08-18 09:38:50 EDT
Got a testcase for this or a more complete description?   Is this multiple random reads within a single file, or (sounds more likely) randomly reading files scattered around a filesystem?
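Since the question went unanswered, here is a hypothetical sketch of the second scenario Comment 2 describes (randomly reading files scattered around a filesystem); the file count and sizes are invented placeholders, not values from this report:

```shell
# Create many small files, then read them back in random order.
# Counts and sizes below are illustrative guesses, not from the report.
dir=$(mktemp -d)
i=1
while [ "$i" -le 100 ]; do
    dd if=/dev/zero of="$dir/f$i" bs=4096 count=1 2>/dev/null
    i=$((i + 1))
done
# For a cold-cache run, drop caches first (root only):
#   echo 3 > /proc/sys/vm/drop_caches
start=$(date +%s)
for f in $(ls "$dir" | shuf); do
    cat "$dir/$f" > /dev/null
done
end=$(date +%s)
echo "100 random file reads in $((end - start))s"
rm -rf "$dir"
```

On a cold inode cache, each read of this kind can cost several seeks (directory block, inode block, then data block), which matches the multi-read pattern the reporter describes.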
Comment 3 Eric Sandeen 2013-02-25 14:05:56 EST
No response to the question after 3.5 years; I guess it's not terribly critical.

Feel free to re-open w/ more info if needed.
