Bug 450922 - very sub-optimal default readahead settings on lvm device
Product: Fedora
Classification: Fedora
Component: lvm2
Hardware: All  OS: Linux
Priority: low  Severity: medium
Assigned To: Milan Broz
QA Contact: Fedora Extras Quality Assurance
Reported: 2008-06-11 14:35 EDT by John Ellson
Modified: 2013-02-28 23:06 EST

Doc Type: Bug Fix
Last Closed: 2008-07-01 08:10:51 EDT

Description John Ellson 2008-06-11 14:35:20 EDT
Description of problem:
I saw some references recently to performance penalties of 20-30% for LVM, and
wondered what I was getting from running LVM over 4-way RAID-0. I was horrified
to discover a 50% penalty! Googling around turned up this known fix, which
restores just about all of the performance (as measured by hdparm -t):

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. hdparm -t /dev/mapper/VolGroup00-LogVol01
2. blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01
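
As a unit check: blockdev --setra takes a count of 512-byte sectors, so 8192
here corresponds to 4 MB of readahead. A minimal way to read the setting back
in both units (the dm-1 node is an assumption; map the LV to its dm-N node
with dmsetup ls):

$ blockdev --getra /dev/mapper/VolGroup00-LogVol01   # value in 512-byte sectors
$ cat /sys/block/dm-1/queue/read_ahead_kb            # same setting in KB (8192 sectors = 4096 KB)
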
Actual results:

$ blockdev --getra /dev/mapper/VolGroup00-LogVol00

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
 Timing buffered disk reads:  374 MB in  3.01 seconds = 124.21 MB/sec

$ blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
 Timing buffered disk reads:  734 MB in  3.00 seconds = 244.57 MB/sec
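
(That works out to 124.21 / 244.57 ≈ 0.51, i.e. the default readahead delivers
roughly half the throughput of the tuned setting, consistent with the 50%
penalty described above.)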

Expected results:
No big performance penalties for LVM, at least not without big red flags to the
user.

Additional info:
Now I'm wondering why I'm using LVM over RAID-0 at all. There is no way I'm
going to extend the logical partition to another device (4 disks). Perhaps the
right solution for me is to drop LVM and run my file system directly on /dev/md0.
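
One caveat on the workaround: blockdev --setra does not survive a reboot, so it
has to be reapplied at boot. A minimal sketch, assuming a stock Fedora init
where /etc/rc.d/rc.local runs at the end of boot:

# echo 'blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01' >> /etc/rc.d/rc.local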
Comment 1 John Ellson 2008-06-11 14:43:13 EDT
Perhaps hdparm is misleading? I get essentially no increase in performance for
my make jobs with this readahead change.
Comment 2 Milan Broz 2008-07-01 08:10:51 EDT
- hdparm runs just synchronous reads of 2MB blocks; basically it should return
values similar to
blockdev --flushbufs $DEV ; dd iflag=sync if=$DEV of=/dev/null bs=2048k count=100

- readahead setting is now properly set for striped LVs (RAID0) in lvm2
(values should be similar to the MD subsystem)

# lvcreate -i4 -L 100G -n lv_s2 vg_test
  Using default stripesize 64.00 KB
  Logical volume "lv_s2" created
# lvs -o +devices
  LV    VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv_s2 vg_test -wi-a- 100.00G                                      

# blockdev --getra /dev/sdb
# blockdev --getra /dev/vg_test/lv_s2

- there are still some issues when stacking devices (LVM over MD); see bug 232843
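
The per-LV readahead can also be pinned in the LVM metadata itself instead of
via blockdev; a sketch using lvchange (vg_test/lv_s2 as above, value in
512-byte sectors; --readahead auto is the default, and is what the striped-LV
fix above computes):

# lvchange --readahead 8192 vg_test/lv_s2
# blockdev --getra /dev/vg_test/lv_s2

Unlike blockdev --setra, this value is stored in the volume group metadata and
reapplied each time the LV is activated.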
